Mar 7 01:59:56.458219 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:59:56.458264 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:59:56.458283 kernel: BIOS-provided physical RAM map:
Mar 7 01:59:56.458294 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 7 01:59:56.458303 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 7 01:59:56.458312 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 7 01:59:56.458389 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 7 01:59:56.458401 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 7 01:59:56.458410 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 7 01:59:56.458427 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 7 01:59:56.458437 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 7 01:59:56.458447 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 7 01:59:56.458457 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 7 01:59:56.458466 kernel: NX (Execute Disable) protection: active
Mar 7 01:59:56.458591 kernel: APIC: Static calls initialized
Mar 7 01:59:56.458613 kernel: SMBIOS 2.8 present.
Mar 7 01:59:56.458624 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 7 01:59:56.458635 kernel: Hypervisor detected: KVM
Mar 7 01:59:56.458645 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:59:56.458655 kernel: kvm-clock: using sched offset of 14811918863 cycles
Mar 7 01:59:56.458667 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:59:56.458678 kernel: tsc: Detected 2445.426 MHz processor
Mar 7 01:59:56.458689 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:59:56.458701 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:59:56.458717 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 7 01:59:56.458728 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 7 01:59:56.458739 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:59:56.458821 kernel: Using GB pages for direct mapping
Mar 7 01:59:56.458833 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:59:56.458844 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 7 01:59:56.458855 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:59:56.458866 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:59:56.458877 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:59:56.458893 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 7 01:59:56.458904 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:59:56.458915 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:59:56.458926 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:59:56.458937 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:59:56.458947 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 7 01:59:56.458958 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 7 01:59:56.458975 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 7 01:59:56.458991 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 7 01:59:56.459003 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 7 01:59:56.459015 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 7 01:59:56.459026 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 7 01:59:56.459037 kernel: No NUMA configuration found
Mar 7 01:59:56.459048 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 7 01:59:56.459063 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 7 01:59:56.459074 kernel: Zone ranges:
Mar 7 01:59:56.459086 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:59:56.459097 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 7 01:59:56.459108 kernel: Normal empty
Mar 7 01:59:56.459119 kernel: Movable zone start for each node
Mar 7 01:59:56.459131 kernel: Early memory node ranges
Mar 7 01:59:56.459142 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 7 01:59:56.459153 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 7 01:59:56.459164 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 7 01:59:56.459181 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:59:56.459192 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 7 01:59:56.459203 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 7 01:59:56.459214 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 7 01:59:56.459225 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:59:56.459236 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:59:56.459248 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 7 01:59:56.459259 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:59:56.459270 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:59:56.459287 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:59:56.459299 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:59:56.459310 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:59:56.459322 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 7 01:59:56.459333 kernel: TSC deadline timer available
Mar 7 01:59:56.459344 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 7 01:59:56.459355 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 01:59:56.459366 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 7 01:59:56.459378 kernel: kvm-guest: setup PV sched yield
Mar 7 01:59:56.459394 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 7 01:59:56.459406 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:59:56.459417 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:59:56.459428 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 7 01:59:56.459440 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 7 01:59:56.459452 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 7 01:59:56.459463 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 7 01:59:56.459475 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:59:56.459609 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:59:56.459630 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:59:56.459642 kernel: random: crng init done
Mar 7 01:59:56.459653 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:59:56.459664 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:59:56.459676 kernel: Fallback order for Node 0: 0
Mar 7 01:59:56.459687 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 7 01:59:56.459698 kernel: Policy zone: DMA32
Mar 7 01:59:56.459710 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:59:56.459726 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved)
Mar 7 01:59:56.459738 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 7 01:59:56.459821 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:59:56.459836 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:59:56.459848 kernel: Dynamic Preempt: voluntary
Mar 7 01:59:56.459860 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:59:56.459875 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:59:56.459887 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 7 01:59:56.459900 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:59:56.459920 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:59:56.459934 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:59:56.459946 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:59:56.459958 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 7 01:59:56.459970 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 7 01:59:56.459983 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:59:56.459995 kernel: Console: colour VGA+ 80x25
Mar 7 01:59:56.460008 kernel: printk: console [ttyS0] enabled
Mar 7 01:59:56.460020 kernel: ACPI: Core revision 20230628
Mar 7 01:59:56.460039 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 7 01:59:56.460052 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:59:56.460065 kernel: x2apic enabled
Mar 7 01:59:56.460077 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:59:56.460090 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 7 01:59:56.460103 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 7 01:59:56.460114 kernel: kvm-guest: setup PV IPIs
Mar 7 01:59:56.460126 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 7 01:59:56.460155 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 7 01:59:56.460168 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 7 01:59:56.460180 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 7 01:59:56.460193 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 7 01:59:56.460209 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 7 01:59:56.460221 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:59:56.460234 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:59:56.460246 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:59:56.460257 kernel: Speculative Store Bypass: Vulnerable
Mar 7 01:59:56.460275 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 7 01:59:56.460287 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 7 01:59:56.460301 kernel: active return thunk: srso_alias_return_thunk
Mar 7 01:59:56.460312 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 7 01:59:56.460325 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 7 01:59:56.460337 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:59:56.460349 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:59:56.460361 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:59:56.460379 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:59:56.460390 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:59:56.460402 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 7 01:59:56.460413 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:59:56.460425 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:59:56.460438 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:59:56.460449 kernel: landlock: Up and running.
Mar 7 01:59:56.460461 kernel: SELinux: Initializing.
Mar 7 01:59:56.460474 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:59:56.460594 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:59:56.460606 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 7 01:59:56.460617 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:59:56.460629 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:59:56.460642 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:59:56.460654 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 7 01:59:56.460667 kernel: signal: max sigframe size: 1776
Mar 7 01:59:56.460678 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:59:56.460691 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:59:56.460708 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:59:56.460721 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:59:56.460733 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:59:56.470171 kernel: .... node #0, CPUs: #1 #2 #3
Mar 7 01:59:56.470213 kernel: smp: Brought up 1 node, 4 CPUs
Mar 7 01:59:56.470227 kernel: smpboot: Max logical packages: 1
Mar 7 01:59:56.470238 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 7 01:59:56.470248 kernel: devtmpfs: initialized
Mar 7 01:59:56.470258 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:59:56.470279 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:59:56.470290 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 7 01:59:56.470301 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:59:56.470313 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:59:56.470327 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:59:56.470339 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:59:56.470353 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:59:56.470366 kernel: audit: type=2000 audit(1772848783.308:1): state=initialized audit_enabled=0 res=1
Mar 7 01:59:56.470379 kernel: cpuidle: using governor menu
Mar 7 01:59:56.470399 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:59:56.470411 kernel: dca service started, version 1.12.1
Mar 7 01:59:56.470425 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 7 01:59:56.470439 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 7 01:59:56.470454 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:59:56.470467 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:59:56.470594 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:59:56.470608 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:59:56.470618 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:59:56.470634 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:59:56.470647 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:59:56.470660 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:59:56.470673 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:59:56.470686 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:59:56.470697 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:59:56.470708 kernel: ACPI: Interpreter enabled
Mar 7 01:59:56.470720 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 7 01:59:56.470732 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:59:56.470824 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:59:56.470841 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 01:59:56.470854 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 7 01:59:56.470869 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:59:56.471287 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:59:56.471630 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 7 01:59:56.471903 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 7 01:59:56.471931 kernel: PCI host bridge to bus 0000:00
Mar 7 01:59:56.472127 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:59:56.472306 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:59:56.472604 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:59:56.472867 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 7 01:59:56.473049 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 7 01:59:56.473219 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 7 01:59:56.473395 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:59:56.473734 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 7 01:59:56.474455 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 7 01:59:56.474844 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 7 01:59:56.475058 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 7 01:59:56.475267 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 7 01:59:56.475599 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:59:56.475908 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 7 01:59:56.476091 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 7 01:59:56.476258 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 7 01:59:56.476413 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 7 01:59:56.476724 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 7 01:59:56.476961 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 7 01:59:56.477151 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 7 01:59:56.477358 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 7 01:59:56.477661 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 7 01:59:56.483879 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 7 01:59:56.484112 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 7 01:59:56.484310 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 7 01:59:56.486696 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 7 01:59:56.487033 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 7 01:59:56.487252 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 7 01:59:56.487461 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 7 01:59:56.487856 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 7 01:59:56.488081 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 7 01:59:56.488309 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 7 01:59:56.488606 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 7 01:59:56.488638 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:59:56.488652 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:59:56.488663 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:59:56.488676 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:59:56.488688 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 7 01:59:56.488699 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 7 01:59:56.488712 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 7 01:59:56.488722 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 7 01:59:56.488735 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 7 01:59:56.495146 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 7 01:59:56.495166 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 7 01:59:56.495181 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 7 01:59:56.495194 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 7 01:59:56.495209 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 7 01:59:56.495222 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 7 01:59:56.495236 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 7 01:59:56.495249 kernel: iommu: Default domain type: Translated
Mar 7 01:59:56.495263 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:59:56.495296 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:59:56.495310 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:59:56.495323 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 7 01:59:56.495337 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 7 01:59:56.495730 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 7 01:59:56.495985 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 7 01:59:56.496254 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:59:56.496270 kernel: vgaarb: loaded
Mar 7 01:59:56.496296 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 7 01:59:56.496307 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 7 01:59:56.496319 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:59:56.496330 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:59:56.496342 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:59:56.496354 kernel: pnp: PnP ACPI init
Mar 7 01:59:56.501863 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 7 01:59:56.501904 kernel: pnp: PnP ACPI: found 6 devices
Mar 7 01:59:56.501933 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:59:56.501948 kernel: NET: Registered PF_INET protocol family
Mar 7 01:59:56.501962 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:59:56.501976 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 01:59:56.501990 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:59:56.502005 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:59:56.502019 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 01:59:56.502032 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 01:59:56.502044 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:59:56.502065 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:59:56.502079 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:59:56.502090 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:59:56.502279 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:59:56.508032 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:59:56.508259 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:59:56.508457 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 7 01:59:56.508818 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 7 01:59:56.509049 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 7 01:59:56.509072 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:59:56.509086 kernel: Initialise system trusted keyrings
Mar 7 01:59:56.509100 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 01:59:56.509114 kernel: Key type asymmetric registered
Mar 7 01:59:56.509127 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:59:56.509138 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:59:56.509148 kernel: io scheduler mq-deadline registered
Mar 7 01:59:56.509158 kernel: io scheduler kyber registered
Mar 7 01:59:56.509176 kernel: io scheduler bfq registered
Mar 7 01:59:56.509186 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:59:56.509197 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 7 01:59:56.509208 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 7 01:59:56.509221 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 7 01:59:56.509234 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:59:56.509247 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:59:56.509261 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:59:56.509273 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:59:56.509289 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:59:56.509614 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 7 01:59:56.509639 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 01:59:56.512434 kernel: rtc_cmos 00:04: registered as rtc0
Mar 7 01:59:56.512693 kernel: rtc_cmos 00:04: setting system clock to 2026-03-07T01:59:51 UTC (1772848791)
Mar 7 01:59:56.512958 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 7 01:59:56.512982 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 7 01:59:56.512997 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:59:56.513020 kernel: Segment Routing with IPv6
Mar 7 01:59:56.513031 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:59:56.513042 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:59:56.513053 kernel: Key type dns_resolver registered
Mar 7 01:59:56.513063 kernel: IPI shorthand broadcast: enabled
Mar 7 01:59:56.513074 kernel: sched_clock: Marking stable (7803054628, 740092277)->(10819543109, -2276396204)
Mar 7 01:59:56.513086 kernel: registered taskstats version 1
Mar 7 01:59:56.513096 kernel: Loading compiled-in X.509 certificates
Mar 7 01:59:56.513107 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:59:56.513121 kernel: Key type .fscrypt registered
Mar 7 01:59:56.513132 kernel: Key type fscrypt-provisioning registered
Mar 7 01:59:56.513143 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:59:56.513154 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:59:56.513164 kernel: ima: No architecture policies found
Mar 7 01:59:56.513175 kernel: clk: Disabling unused clocks
Mar 7 01:59:56.513186 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:59:56.513196 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:59:56.513207 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:59:56.513221 kernel: Run /init as init process
Mar 7 01:59:56.513232 kernel: with arguments:
Mar 7 01:59:56.513243 kernel: /init
Mar 7 01:59:56.513253 kernel: with environment:
Mar 7 01:59:56.513264 kernel: HOME=/
Mar 7 01:59:56.513274 kernel: TERM=linux
Mar 7 01:59:56.513287 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:59:56.513301 systemd[1]: Detected virtualization kvm.
Mar 7 01:59:56.513315 systemd[1]: Detected architecture x86-64.
Mar 7 01:59:56.513326 systemd[1]: Running in initrd.
Mar 7 01:59:56.513337 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:59:56.513348 systemd[1]: Hostname set to .
Mar 7 01:59:56.513360 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:59:56.513371 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:59:56.513382 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:59:56.513394 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:59:56.513409 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:59:56.513421 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:59:56.513433 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:59:56.513445 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:59:56.513458 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:59:56.513470 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:59:56.513566 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:59:56.513582 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:59:56.513594 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:59:56.513605 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:59:56.513617 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:59:56.513645 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:59:56.513659 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:59:56.513674 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:59:56.513686 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:59:56.513697 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:59:56.513709 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:59:56.513721 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:59:56.513732 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:59:56.513807 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:59:56.513824 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:59:56.513835 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:59:56.513852 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:59:56.513863 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:59:56.513875 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:59:56.513886 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:59:56.513898 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:59:56.513912 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:59:56.513968 systemd-journald[195]: Collecting audit messages is disabled.
Mar 7 01:59:56.514002 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:59:56.514015 systemd-journald[195]: Journal started
Mar 7 01:59:56.514042 systemd-journald[195]: Runtime Journal (/run/log/journal/68d5a92359d644ca9f7081e2cbb99775) is 6.0M, max 48.4M, 42.3M free.
Mar 7 01:59:56.545426 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:59:56.576329 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:59:56.749118 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:59:58.554353 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:59:58.554407 kernel: Bridge firewalling registered
Mar 7 01:59:56.854841 systemd-modules-load[196]: Inserted module 'overlay'
Mar 7 01:59:57.798433 systemd-modules-load[196]: Inserted module 'br_netfilter'
Mar 7 01:59:58.690087 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:59:58.721924 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:59:58.815661 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:59:58.851618 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:59:59.080914 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:59:59.112127 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:59:59.173119 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:59:59.207272 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:59:59.327994 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:59:59.358994 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:59:59.491985 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:59:59.513682 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:59:59.701639 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:59:59.791743 dracut-cmdline[228]: dracut-dracut-053
Mar 7 01:59:59.791743 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 02:00:00.216011 systemd-resolved[236]: Positive Trust Anchors:
Mar 7 02:00:00.216084 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 02:00:00.216138 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 02:00:00.248045 systemd-resolved[236]: Defaulting to hostname 'linux'.
Mar 7 02:00:00.250336 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 02:00:00.475835 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 02:00:00.691970 kernel: SCSI subsystem initialized
Mar 7 02:00:00.761011 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 02:00:00.849608 kernel: iscsi: registered transport (tcp)
Mar 7 02:00:01.050272 kernel: iscsi: registered transport (qla4xxx)
Mar 7 02:00:01.050369 kernel: QLogic iSCSI HBA Driver
Mar 7 02:00:01.641328 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 02:00:01.714019 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 02:00:02.002071 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 02:00:02.002163 kernel: device-mapper: uevent: version 1.0.3
Mar 7 02:00:02.010637 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 02:00:02.175645 kernel: raid6: avx2x4 gen() 13848 MB/s
Mar 7 02:00:02.201026 kernel: raid6: avx2x2 gen() 12616 MB/s
Mar 7 02:00:02.223694 kernel: raid6: avx2x1 gen() 7272 MB/s
Mar 7 02:00:02.223834 kernel: raid6: using algorithm avx2x4 gen() 13848 MB/s
Mar 7 02:00:02.254424 kernel: raid6: .... xor() 2687 MB/s, rmw enabled
Mar 7 02:00:02.254581 kernel: raid6: using avx2x2 recovery algorithm
Mar 7 02:00:02.318294 kernel: xor: automatically using best checksumming function avx
Mar 7 02:00:03.568009 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 02:00:03.656151 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 02:00:03.782151 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 02:00:03.907122 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Mar 7 02:00:03.956268 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 02:00:04.089721 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 02:00:04.220629 dracut-pre-trigger[429]: rd.md=0: removing MD RAID activation
Mar 7 02:00:04.473018 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 02:00:04.526896 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 02:00:04.875112 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 02:00:04.955404 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 02:00:05.101973 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 02:00:05.157904 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 02:00:05.196653 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 02:00:05.198111 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 02:00:05.384145 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 02:00:05.505044 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 02:00:05.505274 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 02:00:05.526016 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 02:00:05.526079 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 02:00:05.526472 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 02:00:05.526693 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 02:00:05.809092 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 02:00:05.891264 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 02:00:06.071674 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 7 02:00:06.119711 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 02:00:06.119875 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 7 02:00:06.160013 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 02:00:06.160098 kernel: GPT:9289727 != 19775487
Mar 7 02:00:06.160118 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 02:00:06.160133 kernel: GPT:9289727 != 19775487
Mar 7 02:00:06.160148 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 02:00:06.160162 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 02:00:07.095004 kernel: libata version 3.00 loaded.
Mar 7 02:00:07.320942 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467)
Mar 7 02:00:07.441178 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 7 02:00:07.554847 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (471)
Mar 7 02:00:07.592851 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 02:00:07.852244 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 7 02:00:08.009751 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 02:00:08.009874 kernel: ahci 0000:00:1f.2: version 3.0
Mar 7 02:00:08.010257 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 7 02:00:08.051050 kernel: AES CTR mode by8 optimization enabled
Mar 7 02:00:08.140896 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 7 02:00:08.141689 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 7 02:00:08.189096 kernel: scsi host0: ahci
Mar 7 02:00:08.188620 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 7 02:00:08.330196 kernel: scsi host1: ahci
Mar 7 02:00:08.330949 kernel: scsi host2: ahci
Mar 7 02:00:08.331174 kernel: scsi host3: ahci
Mar 7 02:00:08.331384 kernel: scsi host4: ahci
Mar 7 02:00:08.301836 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 7 02:00:08.357754 kernel: scsi host5: ahci
Mar 7 02:00:08.358307 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 7 02:00:08.386238 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 7 02:00:08.386319 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 7 02:00:08.393651 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 7 02:00:08.410956 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 7 02:00:08.411035 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 7 02:00:08.449845 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 7 02:00:08.502251 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 02:00:08.710054 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 02:00:08.797073 disk-uuid[553]: Primary Header is updated.
Mar 7 02:00:08.797073 disk-uuid[553]: Secondary Entries is updated.
Mar 7 02:00:08.797073 disk-uuid[553]: Secondary Header is updated.
Mar 7 02:00:08.854701 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 02:00:08.854737 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 7 02:00:08.854755 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 7 02:00:08.854846 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 7 02:00:08.870197 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 7 02:00:08.870270 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 02:00:08.871602 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 7 02:00:08.875828 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 7 02:00:08.902387 kernel: ata3.00: applying bridge limits
Mar 7 02:00:08.938890 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 02:00:09.094658 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 7 02:00:09.094708 kernel: ata3.00: configured for UDMA/100
Mar 7 02:00:09.094738 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 7 02:00:09.627601 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 7 02:00:09.628199 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 7 02:00:09.698594 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 7 02:00:09.954729 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 02:00:09.959167 disk-uuid[556]: The operation has completed successfully.
Mar 7 02:00:10.708915 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 02:00:10.709220 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 02:00:10.852693 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 02:00:10.923682 sh[598]: Success
Mar 7 02:00:11.162064 kernel: hrtimer: interrupt took 16742828 ns
Mar 7 02:00:11.472599 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 7 02:00:11.882306 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 02:00:11.900990 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 02:00:11.953174 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 02:00:12.006678 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 02:00:12.018242 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 02:00:12.018439 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 02:00:12.045741 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 02:00:12.045891 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 02:00:12.169077 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 02:00:12.221193 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 02:00:12.318445 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 02:00:12.596657 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 02:00:12.688048 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 02:00:12.688129 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 02:00:12.688153 kernel: BTRFS info (device vda6): using free space tree
Mar 7 02:00:12.778975 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 02:00:13.016383 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 02:00:13.096182 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 02:00:13.241674 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 02:00:13.405868 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 02:00:14.850670 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 02:00:14.938211 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 02:00:15.402015 ignition[710]: Ignition 2.19.0
Mar 7 02:00:15.402095 ignition[710]: Stage: fetch-offline
Mar 7 02:00:15.402383 ignition[710]: no configs at "/usr/lib/ignition/base.d"
Mar 7 02:00:15.402401 ignition[710]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 02:00:15.484774 ignition[710]: parsed url from cmdline: ""
Mar 7 02:00:15.489860 ignition[710]: no config URL provided
Mar 7 02:00:15.495867 ignition[710]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 02:00:15.495897 ignition[710]: no config at "/usr/lib/ignition/user.ign"
Mar 7 02:00:15.496074 ignition[710]: op(1): [started] loading QEMU firmware config module
Mar 7 02:00:15.563650 systemd-networkd[785]: lo: Link UP
Mar 7 02:00:15.496085 ignition[710]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 7 02:00:15.563657 systemd-networkd[785]: lo: Gained carrier
Mar 7 02:00:15.575603 systemd-networkd[785]: Enumeration completed
Mar 7 02:00:15.577142 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 02:00:15.583443 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 02:00:15.583449 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 02:00:15.603028 systemd-networkd[785]: eth0: Link UP
Mar 7 02:00:15.603035 systemd-networkd[785]: eth0: Gained carrier
Mar 7 02:00:15.603051 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 02:00:15.748154 systemd[1]: Reached target network.target - Network.
Mar 7 02:00:15.766594 ignition[710]: op(1): [finished] loading QEMU firmware config module
Mar 7 02:00:15.829769 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 7 02:00:16.053421 ignition[710]: parsing config with SHA512: 52c723e93418a32e98e6fc53d1f25f1c58012b76ac824152909dabbd8933547b81f67995c403779e4fd9062441fb0a461608e40da3a2ba92b2d84bbc368a1b7a
Mar 7 02:00:16.283061 unknown[710]: fetched base config from "system"
Mar 7 02:00:16.283141 unknown[710]: fetched user config from "qemu"
Mar 7 02:00:16.306735 ignition[710]: fetch-offline: fetch-offline passed
Mar 7 02:00:16.306985 ignition[710]: Ignition finished successfully
Mar 7 02:00:16.341691 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 02:00:16.360752 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 7 02:00:16.392005 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 02:00:16.531152 ignition[791]: Ignition 2.19.0
Mar 7 02:00:16.531165 ignition[791]: Stage: kargs
Mar 7 02:00:16.538218 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Mar 7 02:00:16.538240 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 02:00:16.565012 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 02:00:16.550665 ignition[791]: kargs: kargs passed
Mar 7 02:00:16.550759 ignition[791]: Ignition finished successfully
Mar 7 02:00:16.648250 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 02:00:16.686051 systemd-networkd[785]: eth0: Gained IPv6LL
Mar 7 02:00:17.147460 ignition[799]: Ignition 2.19.0
Mar 7 02:00:17.153192 ignition[799]: Stage: disks
Mar 7 02:00:17.158760 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Mar 7 02:00:17.158783 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 02:00:17.219886 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 02:00:17.163724 ignition[799]: disks: disks passed
Mar 7 02:00:17.163879 ignition[799]: Ignition finished successfully
Mar 7 02:00:17.262754 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 02:00:17.453256 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 02:00:17.507080 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 02:00:17.555048 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 02:00:17.570449 systemd[1]: Reached target basic.target - Basic System.
Mar 7 02:00:17.686096 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 02:00:17.828936 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 02:00:17.848154 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 02:00:17.944125 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 02:00:19.079343 kernel: EXT4-fs (vda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 02:00:19.090035 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 02:00:19.118756 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 02:00:19.211763 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 02:00:19.283625 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 02:00:19.306324 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 02:00:19.306397 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 02:00:19.306434 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 02:00:19.381765 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 02:00:19.402271 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818)
Mar 7 02:00:19.445150 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 02:00:19.445353 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 02:00:19.449873 kernel: BTRFS info (device vda6): using free space tree
Mar 7 02:00:19.546026 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 02:00:19.559161 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 02:00:19.619011 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 02:00:20.032188 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 02:00:20.112785 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Mar 7 02:00:20.155362 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 02:00:20.202425 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 02:00:21.921264 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 02:00:21.978454 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 02:00:22.070364 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 02:00:22.035184 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 02:00:22.069402 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 02:00:22.390604 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 02:00:22.456046 ignition[931]: INFO : Ignition 2.19.0
Mar 7 02:00:22.456046 ignition[931]: INFO : Stage: mount
Mar 7 02:00:22.481389 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 02:00:22.481389 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 02:00:22.481389 ignition[931]: INFO : mount: mount passed
Mar 7 02:00:22.481389 ignition[931]: INFO : Ignition finished successfully
Mar 7 02:00:22.477876 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 02:00:22.619366 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 02:00:22.707214 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 02:00:22.908333 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944)
Mar 7 02:00:22.950041 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 02:00:22.995708 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 02:00:22.995798 kernel: BTRFS info (device vda6): using free space tree
Mar 7 02:00:23.137199 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 02:00:23.182754 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 02:00:23.525379 ignition[961]: INFO : Ignition 2.19.0
Mar 7 02:00:23.525379 ignition[961]: INFO : Stage: files
Mar 7 02:00:23.525379 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 02:00:23.525379 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 02:00:23.669388 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 02:00:23.669388 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 02:00:23.669388 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 02:00:23.842125 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 02:00:23.882340 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 02:00:23.927448 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 02:00:23.889174 unknown[961]: wrote ssh authorized keys file for user: core
Mar 7 02:00:23.992351 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 02:00:23.992351 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 02:00:24.348334 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 02:00:26.610966 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 02:00:26.688889 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 02:00:26.688889 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 02:00:26.688889 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 02:00:26.688889 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 02:00:26.688889 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 02:00:26.688889 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 02:00:26.688889 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 02:00:26.688889 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 02:00:26.688889 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 02:00:26.688889 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 02:00:26.688889 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 02:00:26.688889 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 02:00:26.688889 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 02:00:26.688889 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 7 02:00:27.150141 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 7 02:00:35.168998 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 02:00:35.181317 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 7 02:00:35.204955 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 02:00:35.246185 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 02:00:35.246185 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 7 02:00:35.246185 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 7 02:00:35.352468 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 7 02:00:35.352468 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 7 02:00:35.352468 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 7 02:00:35.352468 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 7 02:00:35.825933 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 7 02:00:35.874902 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 7 02:00:35.874902 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 7 02:00:35.874902 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 02:00:35.874902 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 02:00:35.997139 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 02:00:35.997139 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 02:00:35.997139 ignition[961]: INFO : files: files passed
Mar 7 02:00:35.997139 ignition[961]: INFO : Ignition finished successfully
Mar 7 02:00:35.949639 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 02:00:36.094695 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 02:00:36.139641 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 02:00:36.175749 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 02:00:36.176371 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 02:00:36.291937 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 7 02:00:36.308058 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 02:00:36.308058 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 02:00:36.306984 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 02:00:36.402171 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 02:00:36.340440 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 02:00:36.416582 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 02:00:36.614720 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 02:00:36.618261 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 02:00:36.704104 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 02:00:36.717561 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 02:00:36.723184 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 02:00:36.839104 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 02:00:37.018719 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 02:00:37.082772 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 02:00:37.155669 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 02:00:37.175196 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 02:00:37.205251 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 02:00:37.242998 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 02:00:37.244396 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 02:00:37.298168 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 02:00:37.333222 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 02:00:37.348060 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 02:00:37.358094 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 02:00:37.399555 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 02:00:37.420976 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 02:00:37.443676 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 02:00:37.536296 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 02:00:37.579113 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 02:00:37.586669 systemd[1]: Stopped target swap.target - Swaps. Mar 7 02:00:37.592301 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 02:00:37.592617 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 02:00:37.670259 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 02:00:37.685393 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 02:00:37.696662 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 02:00:37.697461 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 02:00:37.743916 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 02:00:37.744146 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 7 02:00:37.747983 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 02:00:37.748152 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 02:00:37.778203 systemd[1]: Stopped target paths.target - Path Units. Mar 7 02:00:37.781401 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 02:00:37.798706 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 02:00:37.942340 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 02:00:37.950158 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 02:00:37.950357 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 02:00:37.950584 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 02:00:37.951089 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 02:00:37.951582 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Mar 7 02:00:37.953082 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 02:00:37.953249 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 02:00:38.041030 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 02:00:38.041247 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 02:00:38.140945 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 02:00:38.176121 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 02:00:38.176697 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 02:00:38.304675 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 02:00:38.362252 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 02:00:38.362693 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 02:00:38.363059 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 02:00:38.363234 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 02:00:38.449200 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 02:00:38.449759 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 7 02:00:38.557201 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 02:00:38.729616 ignition[1015]: INFO : Ignition 2.19.0 Mar 7 02:00:38.729616 ignition[1015]: INFO : Stage: umount Mar 7 02:00:38.729616 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 02:00:38.729616 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 02:00:38.729616 ignition[1015]: INFO : umount: umount passed Mar 7 02:00:38.729616 ignition[1015]: INFO : Ignition finished successfully Mar 7 02:00:38.626183 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Mar 7 02:00:38.626410 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 02:00:38.688124 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 02:00:38.688415 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 02:00:38.812450 systemd[1]: Stopped target network.target - Network. Mar 7 02:00:38.899194 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 02:00:38.901221 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 02:00:38.909175 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 02:00:38.909295 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 02:00:38.912016 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 02:00:38.912091 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 02:00:38.982349 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 7 02:00:38.988191 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 7 02:00:39.018029 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 02:00:39.018280 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 02:00:39.045119 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 02:00:39.074222 systemd-networkd[785]: eth0: DHCPv6 lease lost Mar 7 02:00:39.090287 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 02:00:39.100886 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 02:00:39.101144 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 02:00:39.153682 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 02:00:39.153810 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 02:00:39.213890 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 7 02:00:39.218021 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Mar 7 02:00:39.220765 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 02:00:39.279225 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 02:00:39.344813 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 02:00:39.366376 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 02:00:39.413431 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 7 02:00:39.417296 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 02:00:39.465958 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 02:00:39.466108 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 02:00:39.478255 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 02:00:39.478336 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 02:00:39.496146 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 02:00:39.496246 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 7 02:00:39.503057 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 02:00:39.503146 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 7 02:00:39.503292 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 02:00:39.503356 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 02:00:39.507373 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 02:00:39.662813 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 02:00:39.663067 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 02:00:39.693646 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 02:00:39.693808 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Mar 7 02:00:39.693986 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 02:00:39.694059 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 02:00:39.694150 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 02:00:39.694208 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 02:00:39.694409 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 02:00:39.694466 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 02:00:39.711018 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 02:00:39.718591 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 02:00:39.816899 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 02:00:39.817107 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 02:00:39.920660 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 02:00:39.983601 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 02:00:40.183038 systemd[1]: Switching root. Mar 7 02:00:40.311123 systemd-journald[195]: Journal stopped Mar 7 02:00:49.981569 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). 
Mar 7 02:00:49.981738 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 02:00:49.981774 kernel: SELinux: policy capability open_perms=1
Mar 7 02:00:49.981794 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 02:00:49.981819 kernel: SELinux: policy capability always_check_network=0
Mar 7 02:00:49.981837 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 02:00:49.981922 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 02:00:49.981951 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 02:00:49.981970 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 02:00:49.981987 kernel: audit: type=1403 audit(1772848840.967:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 02:00:49.982007 systemd[1]: Successfully loaded SELinux policy in 204.901ms.
Mar 7 02:00:49.982054 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 74.551ms.
Mar 7 02:00:49.982075 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 02:00:49.982094 systemd[1]: Detected virtualization kvm.
Mar 7 02:00:49.982113 systemd[1]: Detected architecture x86-64.
Mar 7 02:00:49.982137 systemd[1]: Detected first boot.
Mar 7 02:00:49.982156 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 02:00:49.982176 zram_generator::config[1058]: No configuration found.
Mar 7 02:00:49.982204 systemd[1]: Populated /etc with preset unit settings.
Mar 7 02:00:49.982223 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 02:00:49.982241 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 02:00:49.982260 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 02:00:49.982281 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 02:00:49.982306 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 02:00:49.982326 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 02:00:49.982344 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 02:00:49.982364 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 02:00:49.982384 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 02:00:49.982403 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 02:00:49.982422 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 02:00:49.982441 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 02:00:49.982467 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 02:00:49.982585 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 02:00:49.982610 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 02:00:49.982630 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 02:00:49.982650 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 02:00:49.982669 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 02:00:49.982688 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 02:00:49.982707 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 02:00:49.982725 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 02:00:49.982752 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 02:00:49.982772 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 02:00:49.982792 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 02:00:49.982811 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 02:00:49.982829 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 02:00:49.982848 systemd[1]: Reached target swap.target - Swaps.
Mar 7 02:00:49.982931 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 02:00:49.982951 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 02:00:49.982979 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 02:00:49.983001 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 02:00:49.983020 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 02:00:49.983040 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 02:00:49.983060 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 02:00:49.983086 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 02:00:49.983105 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 02:00:49.983123 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 02:00:49.983142 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 02:00:49.983167 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 02:00:49.983186 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 02:00:49.983204 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 02:00:49.983223 systemd[1]: Reached target machines.target - Containers.
Mar 7 02:00:49.983242 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 02:00:49.983260 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 02:00:49.983280 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 02:00:49.983299 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 02:00:49.983323 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 02:00:49.983341 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 02:00:49.983358 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 02:00:49.983374 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 02:00:49.983392 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 02:00:49.983410 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 02:00:49.983428 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 02:00:49.983444 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 02:00:49.983460 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 02:00:50.081324 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 02:00:50.081452 kernel: fuse: init (API version 7.39)
Mar 7 02:00:50.081474 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 02:00:50.081780 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 02:00:50.081799 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 02:00:50.081816 kernel: loop: module loaded
Mar 7 02:00:50.082003 systemd-journald[1145]: Collecting audit messages is disabled.
Mar 7 02:00:50.082041 kernel: ACPI: bus type drm_connector registered
Mar 7 02:00:50.082068 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 02:00:50.082089 systemd-journald[1145]: Journal started
Mar 7 02:00:50.082118 systemd-journald[1145]: Runtime Journal (/run/log/journal/68d5a92359d644ca9f7081e2cbb99775) is 6.0M, max 48.4M, 42.3M free.
Mar 7 02:00:50.089101 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 02:00:46.100324 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 02:00:46.208961 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 7 02:00:46.212069 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 02:00:46.223776 systemd[1]: systemd-journald.service: Consumed 2.746s CPU time.
Mar 7 02:00:50.209642 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 02:00:50.209827 systemd[1]: Stopped verity-setup.service.
Mar 7 02:00:50.299034 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 02:00:50.362434 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 02:00:50.367689 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 02:00:50.394372 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 02:00:50.442587 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 02:00:50.470617 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 02:00:50.483615 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 02:00:50.499573 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 02:00:50.516725 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 02:00:50.529421 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 02:00:50.549664 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 02:00:50.550057 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 02:00:50.568759 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 02:00:50.569173 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 02:00:50.587233 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 02:00:50.587623 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 02:00:50.601843 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 02:00:50.602149 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 02:00:50.615311 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 02:00:50.645834 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 02:00:50.671262 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 02:00:50.673735 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 02:00:50.720230 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 02:00:50.749736 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 02:00:50.785630 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 02:00:50.809408 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 02:00:50.911773 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 02:00:50.973410 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 02:00:51.073572 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 02:00:51.102243 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 02:00:51.102348 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 02:00:51.127042 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 02:00:51.176226 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 02:00:51.216962 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 02:00:51.228997 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 02:00:51.262651 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 02:00:51.297255 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 02:00:51.347366 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 02:00:51.358328 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 02:00:51.387690 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 02:00:51.418690 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 02:00:51.481724 systemd-journald[1145]: Time spent on flushing to /var/log/journal/68d5a92359d644ca9f7081e2cbb99775 is 385.515ms for 941 entries.
Mar 7 02:00:51.481724 systemd-journald[1145]: System Journal (/var/log/journal/68d5a92359d644ca9f7081e2cbb99775) is 8.0M, max 195.6M, 187.6M free.
Mar 7 02:00:52.210137 systemd-journald[1145]: Received client request to flush runtime journal.
Mar 7 02:00:52.210352 kernel: loop0: detected capacity change from 0 to 140768
Mar 7 02:00:51.527826 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 02:00:51.621437 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 02:00:51.668307 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 02:00:51.869623 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 02:00:51.902398 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 02:00:51.969905 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 02:00:52.019076 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 02:00:52.217687 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 02:00:52.306228 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 02:00:52.380816 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 02:00:52.436327 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 7 02:00:52.554325 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 02:00:52.679753 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 02:00:52.728467 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 02:00:52.879776 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 02:00:52.900765 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 02:00:52.902459 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 02:00:52.907529 kernel: loop1: detected capacity change from 0 to 142488
Mar 7 02:00:53.411956 kernel: loop2: detected capacity change from 0 to 217752
Mar 7 02:00:53.680697 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Mar 7 02:00:53.680722 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Mar 7 02:00:53.873645 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 02:00:54.056578 kernel: loop3: detected capacity change from 0 to 140768
Mar 7 02:00:54.264899 kernel: loop4: detected capacity change from 0 to 142488
Mar 7 02:00:54.436758 kernel: loop5: detected capacity change from 0 to 217752
Mar 7 02:00:54.561249 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 7 02:00:54.562947 (sd-merge)[1199]: Merged extensions into '/usr'.
Mar 7 02:00:54.599059 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 02:00:54.599830 systemd[1]: Reloading...
Mar 7 02:00:55.491092 zram_generator::config[1222]: No configuration found.
Mar 7 02:00:55.990059 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 02:00:56.021024 ldconfig[1171]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 02:00:56.090826 systemd[1]: Reloading finished in 1485 ms.
Mar 7 02:00:56.194127 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 02:00:56.214046 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 02:00:56.302084 systemd[1]: Starting ensure-sysext.service...
Mar 7 02:00:56.328826 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 02:00:56.373647 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 02:00:56.450314 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 02:00:56.580369 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)...
Mar 7 02:00:56.580420 systemd[1]: Reloading...
Mar 7 02:00:56.613982 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 02:00:56.614794 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 02:00:56.661037 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 02:00:56.662169 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Mar 7 02:00:56.663225 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Mar 7 02:00:56.695310 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 02:00:56.696542 systemd-tmpfiles[1263]: Skipping /boot
Mar 7 02:00:56.992468 systemd-udevd[1266]: Using default interface naming scheme 'v255'.
Mar 7 02:00:57.001194 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 02:00:57.001211 systemd-tmpfiles[1263]: Skipping /boot
Mar 7 02:00:57.199161 zram_generator::config[1292]: No configuration found.
Mar 7 02:00:58.316765 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 7 02:00:58.363352 kernel: ACPI: button: Power Button [PWRF]
Mar 7 02:00:58.571759 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1314)
Mar 7 02:00:58.569094 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 02:00:58.696258 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 7 02:00:58.728237 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 7 02:00:58.758344 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 7 02:00:58.758952 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 7 02:00:58.952374 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 7 02:00:58.953059 systemd[1]: Reloading finished in 2371 ms.
Mar 7 02:00:58.994101 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 02:00:59.023324 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 02:00:59.394625 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 02:00:59.986849 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 02:01:00.025398 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 02:01:00.262474 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 02:01:00.308010 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 02:01:00.339466 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 02:01:00.365814 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 02:01:00.404228 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 02:01:00.420470 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 02:01:00.490033 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 02:01:00.664806 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 02:01:00.701586 augenrules[1380]: No rules
Mar 7 02:01:00.894266 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 02:01:01.273386 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 02:01:01.295866 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 02:01:01.311266 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 02:01:01.322614 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 02:01:01.351833 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 02:01:01.375132 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 02:01:01.379198 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 02:01:01.391955 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 02:01:01.392222 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 02:01:01.407474 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 02:01:01.408081 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 02:01:01.425144 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 02:01:01.497097 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 7 02:01:01.554724 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 02:01:01.555130 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 02:01:01.598714 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 02:01:01.617119 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 02:01:01.645228 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 02:01:01.654626 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 02:01:01.660684 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 02:01:01.681196 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 02:01:01.703153 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 02:01:01.847580 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 02:01:01.850280 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 02:01:01.850735 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 02:01:01.863203 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 02:01:01.869132 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 02:01:01.869588 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 02:01:01.934401 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 02:01:01.934792 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 02:01:01.955567 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 02:01:01.958734 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 02:01:01.971996 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 02:01:01.973158 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 02:01:02.028136 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 02:01:02.123764 systemd[1]: Finished ensure-sysext.service.
Mar 7 02:01:02.230371 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 02:01:02.235661 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 02:01:02.390717 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 7 02:01:02.749553 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 02:01:03.188808 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 02:01:03.455096 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 02:01:04.296429 systemd-networkd[1378]: lo: Link UP
Mar 7 02:01:04.296447 systemd-networkd[1378]: lo: Gained carrier
Mar 7 02:01:04.301848 systemd-networkd[1378]: Enumeration completed
Mar 7 02:01:04.307101 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 7 02:01:04.312000 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 02:01:04.312008 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 02:01:04.316389 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 02:01:04.323647 systemd-networkd[1378]: eth0: Link UP Mar 7 02:01:04.325268 systemd[1]: Reached target time-set.target - System Time Set. Mar 7 02:01:04.325797 systemd-networkd[1378]: eth0: Gained carrier Mar 7 02:01:04.325838 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 02:01:04.358152 systemd-resolved[1385]: Positive Trust Anchors: Mar 7 02:01:04.358178 systemd-resolved[1385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 02:01:04.358234 systemd-resolved[1385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 02:01:04.372017 systemd-resolved[1385]: Defaulting to hostname 'linux'. Mar 7 02:01:04.398462 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 7 02:01:04.422558 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 02:01:04.453103 systemd[1]: Reached target network.target - Network. Mar 7 02:01:04.487652 systemd-networkd[1378]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 7 02:01:04.492770 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Mar 7 02:01:04.520587 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection. Mar 7 02:01:05.006034 systemd-timesyncd[1413]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 7 02:01:05.006354 systemd-timesyncd[1413]: Initial clock synchronization to Sat 2026-03-07 02:01:05.004960 UTC. Mar 7 02:01:05.006717 systemd-resolved[1385]: Clock change detected. Flushing caches. Mar 7 02:01:05.659095 kernel: kvm_amd: TSC scaling supported Mar 7 02:01:05.660676 kernel: kvm_amd: Nested Virtualization enabled Mar 7 02:01:05.664329 kernel: kvm_amd: Nested Paging enabled Mar 7 02:01:05.664391 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 7 02:01:05.671259 kernel: kvm_amd: PMU virtualization is disabled Mar 7 02:01:06.385360 kernel: EDAC MC: Ver: 3.0.0 Mar 7 02:01:06.507132 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 7 02:01:06.540291 systemd-networkd[1378]: eth0: Gained IPv6LL Mar 7 02:01:06.948509 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 7 02:01:06.986095 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 02:01:07.063570 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 02:01:07.086700 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 02:01:07.305353 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 7 02:01:07.346006 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 02:01:07.366960 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 02:01:07.395778 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 7 02:01:07.581540 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Mar 7 02:01:07.760674 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 7 02:01:07.801674 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 7 02:01:07.827125 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 7 02:01:07.856198 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 7 02:01:07.862475 systemd[1]: Reached target paths.target - Path Units. Mar 7 02:01:07.872431 systemd[1]: Reached target timers.target - Timer Units. Mar 7 02:01:07.914187 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 7 02:01:07.952174 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 7 02:01:08.021513 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 7 02:01:08.079504 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 7 02:01:08.114671 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 7 02:01:08.152142 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 02:01:08.153449 systemd[1]: Reached target basic.target - Basic System. Mar 7 02:01:08.153572 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 7 02:01:08.153660 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 7 02:01:08.203754 systemd[1]: Starting containerd.service - containerd container runtime... Mar 7 02:01:08.256294 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 7 02:01:08.262600 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 02:01:08.314707 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Mar 7 02:01:08.349578 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 7 02:01:08.378464 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 7 02:01:08.401458 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 7 02:01:08.432773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:01:08.454766 jq[1437]: false Mar 7 02:01:08.463097 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 7 02:01:08.501590 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 02:01:08.523603 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 7 02:01:08.556924 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 7 02:01:08.584329 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Mar 7 02:01:08.590090 dbus-daemon[1436]: [system] SELinux support is enabled Mar 7 02:01:08.631110 extend-filesystems[1438]: Found loop3 Mar 7 02:01:08.631110 extend-filesystems[1438]: Found loop4 Mar 7 02:01:08.631110 extend-filesystems[1438]: Found loop5 Mar 7 02:01:08.671019 extend-filesystems[1438]: Found sr0 Mar 7 02:01:08.671019 extend-filesystems[1438]: Found vda Mar 7 02:01:08.671019 extend-filesystems[1438]: Found vda1 Mar 7 02:01:08.671019 extend-filesystems[1438]: Found vda2 Mar 7 02:01:08.671019 extend-filesystems[1438]: Found vda3 Mar 7 02:01:08.671019 extend-filesystems[1438]: Found usr Mar 7 02:01:08.671019 extend-filesystems[1438]: Found vda4 Mar 7 02:01:08.671019 extend-filesystems[1438]: Found vda6 Mar 7 02:01:08.671019 extend-filesystems[1438]: Found vda7 Mar 7 02:01:08.671019 extend-filesystems[1438]: Found vda9 Mar 7 02:01:08.671019 extend-filesystems[1438]: Checking size of /dev/vda9 Mar 7 02:01:09.185180 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1319) Mar 7 02:01:09.185279 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 7 02:01:08.652032 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 7 02:01:09.185609 extend-filesystems[1438]: Resized partition /dev/vda9 Mar 7 02:01:08.773714 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 7 02:01:09.213411 extend-filesystems[1461]: resize2fs 1.47.1 (20-May-2024) Mar 7 02:01:08.785964 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 02:01:08.817731 systemd[1]: Starting update-engine.service - Update Engine... Mar 7 02:01:08.929003 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Mar 7 02:01:09.287496 jq[1464]: true Mar 7 02:01:09.019657 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 7 02:01:09.102204 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 7 02:01:09.319441 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 02:01:09.319949 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 7 02:01:09.330959 update_engine[1462]: I20260307 02:01:09.324370 1462 main.cc:92] Flatcar Update Engine starting Mar 7 02:01:09.370495 update_engine[1462]: I20260307 02:01:09.334527 1462 update_check_scheduler.cc:74] Next update check in 10m39s Mar 7 02:01:09.336277 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 02:01:09.338934 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 7 02:01:09.352695 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 02:01:09.395685 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 7 02:01:09.396140 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 7 02:01:09.430072 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 7 02:01:09.525726 jq[1472]: true Mar 7 02:01:09.528930 extend-filesystems[1461]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 7 02:01:09.528930 extend-filesystems[1461]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 7 02:01:09.528930 extend-filesystems[1461]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 7 02:01:09.560153 extend-filesystems[1438]: Resized filesystem in /dev/vda9 Mar 7 02:01:09.561069 systemd-logind[1453]: Watching system buttons on /dev/input/event1 (Power Button) Mar 7 02:01:09.561164 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 7 02:01:09.583419 systemd-logind[1453]: New seat seat0. 
Mar 7 02:01:09.735158 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 02:01:09.758008 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 02:01:09.809507 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 02:01:09.810095 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 02:01:09.836589 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 02:01:09.946408 dbus-daemon[1436]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 7 02:01:10.101275 tar[1471]: linux-amd64/LICENSE Mar 7 02:01:10.102044 tar[1471]: linux-amd64/helm Mar 7 02:01:10.119660 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 7 02:01:10.120401 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 7 02:01:10.330523 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 02:01:10.405280 systemd[1]: Started update-engine.service - Update Engine. Mar 7 02:01:10.457997 bash[1513]: Updated "/home/core/.ssh/authorized_keys" Mar 7 02:01:10.459445 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 7 02:01:10.562024 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 02:01:10.578713 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 02:01:10.583643 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 7 02:01:10.584000 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 7 02:01:10.584195 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Mar 7 02:01:10.605679 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 7 02:01:10.605952 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 02:01:10.653295 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 7 02:01:10.722115 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 02:01:10.742313 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 02:01:10.826304 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 02:01:10.988450 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 02:01:11.048651 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 02:01:11.100720 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 02:01:11.105537 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 02:01:11.128394 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 02:01:11.434658 containerd[1473]: time="2026-03-07T02:01:11.433682462Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 7 02:01:11.561134 containerd[1473]: time="2026-03-07T02:01:11.561070475Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 7 02:01:11.578120 containerd[1473]: time="2026-03-07T02:01:11.578061669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 7 02:01:11.579107 containerd[1473]: time="2026-03-07T02:01:11.578419216Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 7 02:01:11.579107 containerd[1473]: time="2026-03-07T02:01:11.578461746Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 7 02:01:11.579107 containerd[1473]: time="2026-03-07T02:01:11.578733784Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 7 02:01:11.579107 containerd[1473]: time="2026-03-07T02:01:11.578776804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 7 02:01:11.579107 containerd[1473]: time="2026-03-07T02:01:11.578973050Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 02:01:11.579107 containerd[1473]: time="2026-03-07T02:01:11.578997837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 7 02:01:11.579690 containerd[1473]: time="2026-03-07T02:01:11.579665123Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 02:01:11.579766 containerd[1473]: time="2026-03-07T02:01:11.579750071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Mar 7 02:01:11.579902 containerd[1473]: time="2026-03-07T02:01:11.579882038Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 02:01:11.579961 containerd[1473]: time="2026-03-07T02:01:11.579948051Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 7 02:01:11.580144 containerd[1473]: time="2026-03-07T02:01:11.580122777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 02:01:11.586110 containerd[1473]: time="2026-03-07T02:01:11.586055533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 7 02:01:11.586548 containerd[1473]: time="2026-03-07T02:01:11.586512907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 02:01:11.586648 containerd[1473]: time="2026-03-07T02:01:11.586628422Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 7 02:01:11.586975 containerd[1473]: time="2026-03-07T02:01:11.586948049Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 7 02:01:11.587163 containerd[1473]: time="2026-03-07T02:01:11.587140168Z" level=info msg="metadata content store policy set" policy=shared Mar 7 02:01:11.633492 containerd[1473]: time="2026-03-07T02:01:11.628106350Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 02:01:11.633492 containerd[1473]: time="2026-03-07T02:01:11.628296855Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Mar 7 02:01:11.633492 containerd[1473]: time="2026-03-07T02:01:11.628332702Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 02:01:11.633492 containerd[1473]: time="2026-03-07T02:01:11.628384418Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 7 02:01:11.633492 containerd[1473]: time="2026-03-07T02:01:11.628403814Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 7 02:01:11.633492 containerd[1473]: time="2026-03-07T02:01:11.628660304Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 7 02:01:11.633492 containerd[1473]: time="2026-03-07T02:01:11.629083093Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 7 02:01:11.633492 containerd[1473]: time="2026-03-07T02:01:11.629317140Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 7 02:01:11.633492 containerd[1473]: time="2026-03-07T02:01:11.629341265Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 7 02:01:11.633492 containerd[1473]: time="2026-03-07T02:01:11.629360150Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 7 02:01:11.633492 containerd[1473]: time="2026-03-07T02:01:11.629378655Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 02:01:11.633492 containerd[1473]: time="2026-03-07T02:01:11.629396819Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Mar 7 02:01:11.633492 containerd[1473]: time="2026-03-07T02:01:11.629418659Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 02:01:11.633492 containerd[1473]: time="2026-03-07T02:01:11.629441061Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 7 02:01:11.634158 containerd[1473]: time="2026-03-07T02:01:11.629465046Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 02:01:11.634158 containerd[1473]: time="2026-03-07T02:01:11.629486937Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 7 02:01:11.634158 containerd[1473]: time="2026-03-07T02:01:11.629509609Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 7 02:01:11.634158 containerd[1473]: time="2026-03-07T02:01:11.629530678Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 02:01:11.634158 containerd[1473]: time="2026-03-07T02:01:11.629560604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 7 02:01:11.634158 containerd[1473]: time="2026-03-07T02:01:11.629584038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 7 02:01:11.634158 containerd[1473]: time="2026-03-07T02:01:11.629605568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 7 02:01:11.634158 containerd[1473]: time="2026-03-07T02:01:11.629633410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Mar 7 02:01:11.634158 containerd[1473]: time="2026-03-07T02:01:11.629665190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 7 02:01:11.634158 containerd[1473]: time="2026-03-07T02:01:11.629688563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 02:01:11.634158 containerd[1473]: time="2026-03-07T02:01:11.629707379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 7 02:01:11.634158 containerd[1473]: time="2026-03-07T02:01:11.629730371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 7 02:01:11.634158 containerd[1473]: time="2026-03-07T02:01:11.629750569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 02:01:11.634158 containerd[1473]: time="2026-03-07T02:01:11.629773512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 02:01:11.639759 containerd[1473]: time="2026-03-07T02:01:11.638632110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 02:01:11.639759 containerd[1473]: time="2026-03-07T02:01:11.638695548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 02:01:11.639759 containerd[1473]: time="2026-03-07T02:01:11.638723010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 7 02:01:11.639759 containerd[1473]: time="2026-03-07T02:01:11.638755751Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 02:01:11.639759 containerd[1473]: time="2026-03-07T02:01:11.638804051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Mar 7 02:01:11.639759 containerd[1473]: time="2026-03-07T02:01:11.638914958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 02:01:11.639759 containerd[1473]: time="2026-03-07T02:01:11.638940857Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 02:01:11.639759 containerd[1473]: time="2026-03-07T02:01:11.639066692Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 02:01:11.639759 containerd[1473]: time="2026-03-07T02:01:11.639100415Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 02:01:11.639759 containerd[1473]: time="2026-03-07T02:01:11.639120301Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 02:01:11.639759 containerd[1473]: time="2026-03-07T02:01:11.639140569Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 02:01:11.639759 containerd[1473]: time="2026-03-07T02:01:11.639158733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 02:01:11.639759 containerd[1473]: time="2026-03-07T02:01:11.639184131Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 02:01:11.639759 containerd[1473]: time="2026-03-07T02:01:11.639205681Z" level=info msg="NRI interface is disabled by configuration." Mar 7 02:01:11.649575 containerd[1473]: time="2026-03-07T02:01:11.642613834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 7 02:01:11.649700 containerd[1473]: time="2026-03-07T02:01:11.643222360Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 02:01:11.649700 containerd[1473]: time="2026-03-07T02:01:11.643388561Z" level=info msg="Connect containerd service" Mar 7 02:01:11.649700 containerd[1473]: time="2026-03-07T02:01:11.643489229Z" level=info msg="using legacy CRI server" Mar 7 02:01:11.649700 containerd[1473]: time="2026-03-07T02:01:11.643505779Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 02:01:11.649700 containerd[1473]: time="2026-03-07T02:01:11.643934520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 02:01:11.649700 containerd[1473]: time="2026-03-07T02:01:11.645409854Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 02:01:11.649700 containerd[1473]: time="2026-03-07T02:01:11.648380160Z" level=info msg="Start subscribing containerd event" Mar 7 02:01:11.649700 containerd[1473]: time="2026-03-07T02:01:11.648466290Z" level=info msg="Start recovering state" Mar 7 02:01:11.649700 containerd[1473]: time="2026-03-07T02:01:11.648593959Z" level=info msg="Start event monitor" Mar 7 02:01:11.649700 containerd[1473]: time="2026-03-07T02:01:11.648639744Z" level=info msg="Start snapshots 
syncer" Mar 7 02:01:11.649700 containerd[1473]: time="2026-03-07T02:01:11.648665111Z" level=info msg="Start cni network conf syncer for default" Mar 7 02:01:11.649700 containerd[1473]: time="2026-03-07T02:01:11.648683807Z" level=info msg="Start streaming server" Mar 7 02:01:11.680457 containerd[1473]: time="2026-03-07T02:01:11.653361339Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 02:01:11.680457 containerd[1473]: time="2026-03-07T02:01:11.653458681Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 02:01:11.680457 containerd[1473]: time="2026-03-07T02:01:11.670975644Z" level=info msg="containerd successfully booted in 0.241247s" Mar 7 02:01:11.653651 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 02:01:13.599341 tar[1471]: linux-amd64/README.md Mar 7 02:01:13.829765 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 02:01:14.099035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:01:14.113134 (kubelet)[1548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:01:14.122509 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 02:01:14.162956 systemd[1]: Startup finished in 8.589s (kernel) + 46.529s (initrd) + 32.932s (userspace) = 1min 28.052s. Mar 7 02:01:15.520752 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 02:01:15.579949 systemd[1]: Started sshd@0-10.0.0.144:22-10.0.0.1:46366.service - OpenSSH per-connection server daemon (10.0.0.1:46366). Mar 7 02:01:15.842302 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 46366 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:01:15.851577 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:15.927167 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Mar 7 02:01:15.964337 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 02:01:15.990524 systemd-logind[1453]: New session 1 of user core. Mar 7 02:01:16.062472 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 02:01:16.084094 kubelet[1548]: E0307 02:01:16.078955 1548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:01:16.110326 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 02:01:16.116088 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:01:16.116493 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:01:16.117010 systemd[1]: kubelet.service: Consumed 1.920s CPU time. Mar 7 02:01:16.148107 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 02:01:16.657022 systemd[1565]: Queued start job for default target default.target. Mar 7 02:01:16.682443 systemd[1565]: Created slice app.slice - User Application Slice. Mar 7 02:01:16.686539 systemd[1565]: Reached target paths.target - Paths. Mar 7 02:01:16.689125 systemd[1565]: Reached target timers.target - Timers. Mar 7 02:01:16.703438 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 02:01:16.802514 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 02:01:16.802766 systemd[1565]: Reached target sockets.target - Sockets. Mar 7 02:01:16.802793 systemd[1565]: Reached target basic.target - Basic System. Mar 7 02:01:16.802954 systemd[1565]: Reached target default.target - Main User Target. Mar 7 02:01:16.803009 systemd[1565]: Startup finished in 623ms. 
Mar 7 02:01:16.804109 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 02:01:16.839620 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 02:01:17.040935 systemd[1]: Started sshd@1-10.0.0.144:22-10.0.0.1:46378.service - OpenSSH per-connection server daemon (10.0.0.1:46378). Mar 7 02:01:17.200156 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 46378 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:01:17.207059 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:17.241353 systemd-logind[1453]: New session 2 of user core. Mar 7 02:01:17.251474 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 02:01:17.374680 sshd[1577]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:17.406341 systemd[1]: sshd@1-10.0.0.144:22-10.0.0.1:46378.service: Deactivated successfully. Mar 7 02:01:17.414393 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 02:01:17.418692 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit. Mar 7 02:01:17.453757 systemd[1]: Started sshd@2-10.0.0.144:22-10.0.0.1:46384.service - OpenSSH per-connection server daemon (10.0.0.1:46384). Mar 7 02:01:17.457415 systemd-logind[1453]: Removed session 2. Mar 7 02:01:17.550361 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 46384 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:01:17.552354 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:17.600639 systemd-logind[1453]: New session 3 of user core. Mar 7 02:01:17.622423 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 02:01:17.729366 sshd[1584]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:17.756009 systemd[1]: sshd@2-10.0.0.144:22-10.0.0.1:46384.service: Deactivated successfully. Mar 7 02:01:17.758179 systemd[1]: session-3.scope: Deactivated successfully. 
Mar 7 02:01:17.763184 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit. Mar 7 02:01:17.776100 systemd[1]: Started sshd@3-10.0.0.144:22-10.0.0.1:46400.service - OpenSSH per-connection server daemon (10.0.0.1:46400). Mar 7 02:01:17.785615 systemd-logind[1453]: Removed session 3. Mar 7 02:01:17.851332 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 46400 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:01:17.858548 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:17.873431 systemd-logind[1453]: New session 4 of user core. Mar 7 02:01:17.886537 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 02:01:17.989413 sshd[1591]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:18.017537 systemd[1]: sshd@3-10.0.0.144:22-10.0.0.1:46400.service: Deactivated successfully. Mar 7 02:01:18.020591 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 02:01:18.032630 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit. Mar 7 02:01:18.051083 systemd[1]: Started sshd@4-10.0.0.144:22-10.0.0.1:46402.service - OpenSSH per-connection server daemon (10.0.0.1:46402). Mar 7 02:01:18.064648 systemd-logind[1453]: Removed session 4. Mar 7 02:01:18.134629 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 46402 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:01:18.139420 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:18.157565 systemd-logind[1453]: New session 5 of user core. Mar 7 02:01:18.168431 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 7 02:01:18.317722 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 02:01:18.318648 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 02:01:19.472771 (dockerd)[1619]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 02:01:19.475230 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 02:01:20.667733 dockerd[1619]: time="2026-03-07T02:01:20.667626475Z" level=info msg="Starting up" Mar 7 02:01:21.250978 dockerd[1619]: time="2026-03-07T02:01:21.249409941Z" level=info msg="Loading containers: start." Mar 7 02:01:22.200403 kernel: Initializing XFRM netlink socket Mar 7 02:01:22.782224 systemd-networkd[1378]: docker0: Link UP Mar 7 02:01:22.884667 dockerd[1619]: time="2026-03-07T02:01:22.879934598Z" level=info msg="Loading containers: done." Mar 7 02:01:22.986053 dockerd[1619]: time="2026-03-07T02:01:22.984501438Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 02:01:22.986053 dockerd[1619]: time="2026-03-07T02:01:22.984743580Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 02:01:22.986053 dockerd[1619]: time="2026-03-07T02:01:22.985059399Z" level=info msg="Daemon has completed initialization" Mar 7 02:01:23.186797 dockerd[1619]: time="2026-03-07T02:01:23.184623822Z" level=info msg="API listen on /run/docker.sock" Mar 7 02:01:23.186379 systemd[1]: Started docker.service - Docker Application Container Engine. 
Mar 7 02:01:25.148628 containerd[1473]: time="2026-03-07T02:01:25.146938390Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 7 02:01:26.210060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 02:01:26.257774 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:01:27.354796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4199212367.mount: Deactivated successfully. Mar 7 02:01:27.700908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:01:27.756076 (kubelet)[1776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:01:28.254713 kubelet[1776]: E0307 02:01:28.253154 1776 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:01:28.288706 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:01:28.289081 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:01:38.509684 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 7 02:01:38.587392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 02:01:41.756627 containerd[1473]: time="2026-03-07T02:01:41.755759279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:01:42.017474 containerd[1473]: time="2026-03-07T02:01:41.847582158Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 7 02:01:42.017474 containerd[1473]: time="2026-03-07T02:01:41.920403450Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:01:42.596341 containerd[1473]: time="2026-03-07T02:01:42.540802550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:01:42.907116 containerd[1473]: time="2026-03-07T02:01:42.881904496Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 17.734770914s" Mar 7 02:01:42.907116 containerd[1473]: time="2026-03-07T02:01:42.881988733Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 7 02:01:42.949646 containerd[1473]: time="2026-03-07T02:01:42.947747333Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 7 02:01:44.314711 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 02:01:44.575225 (kubelet)[1846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:01:46.483342 kubelet[1846]: E0307 02:01:46.482939 1846 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:01:46.505993 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:01:46.506374 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:01:46.530134 systemd[1]: kubelet.service: Consumed 3.041s CPU time. Mar 7 02:01:54.151763 update_engine[1462]: I20260307 02:01:54.142416 1462 update_attempter.cc:509] Updating boot flags... Mar 7 02:01:57.167917 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 7 02:01:57.553478 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:02:00.598792 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1866) Mar 7 02:02:03.410894 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1865) Mar 7 02:02:04.834213 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 02:02:04.851634 (kubelet)[1881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:02:08.735338 kubelet[1881]: E0307 02:02:08.734619 1881 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:02:08.783348 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:02:08.787254 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:02:08.802326 systemd[1]: kubelet.service: Consumed 4.369s CPU time. Mar 7 02:02:09.787351 containerd[1473]: time="2026-03-07T02:02:09.786743252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:09.809099 containerd[1473]: time="2026-03-07T02:02:09.808759196Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 7 02:02:09.856352 containerd[1473]: time="2026-03-07T02:02:09.855559888Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:09.882470 containerd[1473]: time="2026-03-07T02:02:09.878119002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:09.912165 containerd[1473]: time="2026-03-07T02:02:09.911100180Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id 
\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 26.963283217s" Mar 7 02:02:09.912165 containerd[1473]: time="2026-03-07T02:02:09.911173431Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 7 02:02:10.017420 containerd[1473]: time="2026-03-07T02:02:10.008032448Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 7 02:02:17.362055 containerd[1473]: time="2026-03-07T02:02:17.358769232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:17.370087 containerd[1473]: time="2026-03-07T02:02:17.369348582Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 7 02:02:17.373636 containerd[1473]: time="2026-03-07T02:02:17.373477438Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:17.388620 containerd[1473]: time="2026-03-07T02:02:17.388161747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:17.393998 containerd[1473]: time="2026-03-07T02:02:17.393180145Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 7.375841172s" Mar 7 02:02:17.395995 containerd[1473]: time="2026-03-07T02:02:17.394066874Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 7 02:02:17.408564 containerd[1473]: time="2026-03-07T02:02:17.403629876Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 7 02:02:18.973612 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 7 02:02:19.111705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:02:21.424167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:02:21.526653 (kubelet)[1905]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:02:24.067770 kubelet[1905]: E0307 02:02:24.067350 1905 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:02:24.169317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:02:24.233583 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:02:24.237652 systemd[1]: kubelet.service: Consumed 2.569s CPU time. Mar 7 02:02:26.449497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount510886872.mount: Deactivated successfully. 
Mar 7 02:02:32.444073 containerd[1473]: time="2026-03-07T02:02:32.441568651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:32.476962 containerd[1473]: time="2026-03-07T02:02:32.466178027Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 7 02:02:32.493076 containerd[1473]: time="2026-03-07T02:02:32.493014724Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:32.524138 containerd[1473]: time="2026-03-07T02:02:32.519258065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:32.524138 containerd[1473]: time="2026-03-07T02:02:32.521078249Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 15.117391354s" Mar 7 02:02:32.524138 containerd[1473]: time="2026-03-07T02:02:32.521129335Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 7 02:02:32.527436 containerd[1473]: time="2026-03-07T02:02:32.526263935Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 7 02:02:34.229524 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 7 02:02:34.258441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 02:02:34.340135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1538244592.mount: Deactivated successfully. Mar 7 02:02:36.067423 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:02:36.080088 (kubelet)[1940]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:02:36.742259 kubelet[1940]: E0307 02:02:36.742118 1940 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:02:36.758680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:02:36.765941 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:02:41.274672 containerd[1473]: time="2026-03-07T02:02:41.273776312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:41.278473 containerd[1473]: time="2026-03-07T02:02:41.278395022Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 7 02:02:41.285953 containerd[1473]: time="2026-03-07T02:02:41.284369934Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:41.318260 containerd[1473]: time="2026-03-07T02:02:41.318146904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:41.323726 containerd[1473]: time="2026-03-07T02:02:41.323479964Z" 
level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 8.797165404s" Mar 7 02:02:41.323726 containerd[1473]: time="2026-03-07T02:02:41.323559874Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 7 02:02:41.327392 containerd[1473]: time="2026-03-07T02:02:41.327172648Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 7 02:02:42.151646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount555080254.mount: Deactivated successfully. Mar 7 02:02:42.185319 containerd[1473]: time="2026-03-07T02:02:42.184015676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:42.189262 containerd[1473]: time="2026-03-07T02:02:42.188466742Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 7 02:02:42.194355 containerd[1473]: time="2026-03-07T02:02:42.194064370Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:42.210952 containerd[1473]: time="2026-03-07T02:02:42.209447050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:42.211491 containerd[1473]: time="2026-03-07T02:02:42.211333116Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id 
\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 883.759888ms" Mar 7 02:02:42.211491 containerd[1473]: time="2026-03-07T02:02:42.211382815Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 7 02:02:42.215410 containerd[1473]: time="2026-03-07T02:02:42.215038126Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 7 02:02:43.268995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount564650646.mount: Deactivated successfully. Mar 7 02:02:47.840039 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 7 02:02:47.894323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:02:51.379024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:02:51.573720 (kubelet)[2056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:02:53.735361 kubelet[2056]: E0307 02:02:53.735135 2056 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:02:53.760689 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:02:53.761186 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:02:53.767527 systemd[1]: kubelet.service: Consumed 2.767s CPU time. 
Mar 7 02:02:56.489562 containerd[1473]: time="2026-03-07T02:02:56.489231777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:56.504608 containerd[1473]: time="2026-03-07T02:02:56.504401369Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 7 02:02:56.511195 containerd[1473]: time="2026-03-07T02:02:56.510174945Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:56.534063 containerd[1473]: time="2026-03-07T02:02:56.533485519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:56.541573 containerd[1473]: time="2026-03-07T02:02:56.541458522Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 14.326369044s" Mar 7 02:02:56.541573 containerd[1473]: time="2026-03-07T02:02:56.541575976Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 7 02:03:02.619743 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:03:02.620093 systemd[1]: kubelet.service: Consumed 2.767s CPU time. Mar 7 02:03:02.650682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:03:02.840303 systemd[1]: Reloading requested from client PID 2108 ('systemctl') (unit session-5.scope)... 
Mar 7 02:03:02.840328 systemd[1]: Reloading... Mar 7 02:03:03.239080 zram_generator::config[2150]: No configuration found. Mar 7 02:03:03.869289 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 02:03:04.380364 systemd[1]: Reloading finished in 1530 ms. Mar 7 02:03:04.724339 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:03:04.734596 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 02:03:04.772652 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:03:04.780728 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 02:03:04.781738 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:03:04.821998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:03:05.354339 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:03:05.418938 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 02:03:05.723030 kubelet[2202]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 7 02:03:06.306056 kubelet[2202]: I0307 02:03:06.303608 2202 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 7 02:03:06.306056 kubelet[2202]: I0307 02:03:06.305914 2202 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 02:03:06.306056 kubelet[2202]: I0307 02:03:06.305947 2202 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 02:03:06.306056 kubelet[2202]: I0307 02:03:06.305957 2202 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 02:03:06.310043 kubelet[2202]: I0307 02:03:06.307169 2202 server.go:951] "Client rotation is on, will bootstrap in background" Mar 7 02:03:06.448013 kubelet[2202]: E0307 02:03:06.447157 2202 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 02:03:06.456925 kubelet[2202]: I0307 02:03:06.456388 2202 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 02:03:06.490884 kubelet[2202]: E0307 02:03:06.487056 2202 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 02:03:06.490884 kubelet[2202]: I0307 02:03:06.487153 2202 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 02:03:06.530496 kubelet[2202]: I0307 02:03:06.529149 2202 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 02:03:06.541171 kubelet[2202]: I0307 02:03:06.539612 2202 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 02:03:06.541171 kubelet[2202]: I0307 02:03:06.539727 2202 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 02:03:06.541171 kubelet[2202]: I0307 02:03:06.540215 2202 topology_manager.go:143] "Creating topology manager with none policy" Mar 7 02:03:06.541171 
kubelet[2202]: I0307 02:03:06.540231 2202 container_manager_linux.go:308] "Creating device plugin manager" Mar 7 02:03:06.541979 kubelet[2202]: I0307 02:03:06.540442 2202 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 02:03:06.553572 kubelet[2202]: I0307 02:03:06.549577 2202 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 7 02:03:06.553572 kubelet[2202]: I0307 02:03:06.550012 2202 kubelet.go:482] "Attempting to sync node with API server" Mar 7 02:03:06.553572 kubelet[2202]: I0307 02:03:06.550035 2202 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 02:03:06.553572 kubelet[2202]: I0307 02:03:06.550077 2202 kubelet.go:394] "Adding apiserver pod source" Mar 7 02:03:06.553572 kubelet[2202]: I0307 02:03:06.550095 2202 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 02:03:06.563280 kubelet[2202]: I0307 02:03:06.561084 2202 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 02:03:06.584637 kubelet[2202]: I0307 02:03:06.581651 2202 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 02:03:06.584637 kubelet[2202]: I0307 02:03:06.581736 2202 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 02:03:06.584637 kubelet[2202]: W0307 02:03:06.582007 2202 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 7 02:03:06.658137 kubelet[2202]: I0307 02:03:06.652671 2202 server.go:1257] "Started kubelet" Mar 7 02:03:06.666498 kubelet[2202]: E0307 02:03:06.657021 2202 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.144:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.144:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6cc2a7f5bc2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 02:03:06.621066282 +0000 UTC m=+1.180348820,LastTimestamp:2026-03-07 02:03:06.621066282 +0000 UTC m=+1.180348820,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 02:03:06.671412 kubelet[2202]: I0307 02:03:06.667706 2202 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 02:03:06.671412 kubelet[2202]: I0307 02:03:06.667999 2202 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 02:03:06.671412 kubelet[2202]: I0307 02:03:06.669083 2202 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 02:03:06.671412 kubelet[2202]: I0307 02:03:06.669229 2202 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 02:03:06.679256 kubelet[2202]: I0307 02:03:06.678512 2202 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 7 02:03:06.694877 kubelet[2202]: I0307 02:03:06.690110 2202 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 02:03:06.723406 kubelet[2202]: I0307 02:03:06.720792 2202 server.go:317] 
"Adding debug handlers to kubelet server" Mar 7 02:03:06.723406 kubelet[2202]: E0307 02:03:06.721758 2202 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:03:06.723406 kubelet[2202]: I0307 02:03:06.721805 2202 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 7 02:03:06.730593 kubelet[2202]: E0307 02:03:06.726965 2202 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="200ms" Mar 7 02:03:06.730593 kubelet[2202]: I0307 02:03:06.727310 2202 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 02:03:06.730593 kubelet[2202]: I0307 02:03:06.730181 2202 reconciler.go:29] "Reconciler: start to sync state" Mar 7 02:03:06.743627 kubelet[2202]: I0307 02:03:06.737422 2202 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 02:03:06.743627 kubelet[2202]: E0307 02:03:06.740604 2202 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 02:03:06.743627 kubelet[2202]: I0307 02:03:06.741332 2202 factory.go:223] Registration of the containerd container factory successfully Mar 7 02:03:06.743627 kubelet[2202]: I0307 02:03:06.741348 2202 factory.go:223] Registration of the systemd container factory successfully Mar 7 02:03:06.825112 kubelet[2202]: E0307 02:03:06.824303 2202 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:03:06.854774 kubelet[2202]: I0307 02:03:06.852990 2202 cpu_manager.go:225] "Starting" policy="none" Mar 7 02:03:06.854774 kubelet[2202]: I0307 02:03:06.853020 2202 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 7 02:03:06.854774 kubelet[2202]: I0307 02:03:06.853064 2202 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 7 02:03:06.864549 kubelet[2202]: I0307 02:03:06.863793 2202 policy_none.go:50] "Start" Mar 7 02:03:06.864549 kubelet[2202]: I0307 02:03:06.863948 2202 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 02:03:06.864549 kubelet[2202]: I0307 02:03:06.864038 2202 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 02:03:06.885176 kubelet[2202]: I0307 02:03:06.884304 2202 policy_none.go:44] "Start" Mar 7 02:03:06.933988 kubelet[2202]: E0307 02:03:06.927875 2202 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:03:06.940640 kubelet[2202]: I0307 02:03:06.936545 2202 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 7 02:03:06.949905 kubelet[2202]: E0307 02:03:06.946943 2202 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="400ms" Mar 7 02:03:06.959113 kubelet[2202]: I0307 02:03:06.955343 2202 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 7 02:03:06.959113 kubelet[2202]: I0307 02:03:06.955428 2202 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 7 02:03:06.959113 kubelet[2202]: I0307 02:03:06.955540 2202 kubelet.go:2501] "Starting kubelet main sync loop" Mar 7 02:03:06.959113 kubelet[2202]: E0307 02:03:06.955637 2202 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 02:03:06.962744 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 7 02:03:07.028204 kubelet[2202]: E0307 02:03:07.027971 2202 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:03:07.034610 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 7 02:03:07.053026 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 7 02:03:07.055901 kubelet[2202]: E0307 02:03:07.055780 2202 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 02:03:07.080626 kubelet[2202]: E0307 02:03:07.079127 2202 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 02:03:07.082787 kubelet[2202]: I0307 02:03:07.081350 2202 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 7 02:03:07.082787 kubelet[2202]: I0307 02:03:07.081428 2202 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 02:03:07.089805 kubelet[2202]: I0307 02:03:07.087535 2202 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 7 02:03:07.095026 kubelet[2202]: E0307 02:03:07.094772 2202 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 02:03:07.095026 kubelet[2202]: E0307 02:03:07.094932 2202 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 02:03:07.192199 kubelet[2202]: I0307 02:03:07.191082 2202 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:03:07.211059 kubelet[2202]: E0307 02:03:07.195615 2202 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Mar 7 02:03:07.328713 systemd[1]: Created slice kubepods-burstable-podc8ff0e882b798fcbcbd96cb3cd12bf87.slice - libcontainer container kubepods-burstable-podc8ff0e882b798fcbcbd96cb3cd12bf87.slice. 
Mar 7 02:03:07.348915 kubelet[2202]: I0307 02:03:07.348671 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8ff0e882b798fcbcbd96cb3cd12bf87-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c8ff0e882b798fcbcbd96cb3cd12bf87\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:03:07.348915 kubelet[2202]: I0307 02:03:07.348767 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:03:07.348915 kubelet[2202]: I0307 02:03:07.348807 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:03:07.348915 kubelet[2202]: I0307 02:03:07.348906 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8ff0e882b798fcbcbd96cb3cd12bf87-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c8ff0e882b798fcbcbd96cb3cd12bf87\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:03:07.349164 kubelet[2202]: I0307 02:03:07.348929 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 7 02:03:07.349164 kubelet[2202]: I0307 02:03:07.348949 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:03:07.349164 kubelet[2202]: I0307 02:03:07.348970 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:03:07.349164 kubelet[2202]: I0307 02:03:07.348989 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 7 02:03:07.349164 kubelet[2202]: I0307 02:03:07.349007 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8ff0e882b798fcbcbd96cb3cd12bf87-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c8ff0e882b798fcbcbd96cb3cd12bf87\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:03:07.349340 kubelet[2202]: E0307 02:03:07.349240 2202 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="800ms" Mar 7 02:03:07.389569 kubelet[2202]: E0307 02:03:07.386288 
2202 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:03:07.400499 kubelet[2202]: I0307 02:03:07.399964 2202 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:03:07.400499 kubelet[2202]: E0307 02:03:07.400391 2202 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Mar 7 02:03:07.415019 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice. Mar 7 02:03:07.430640 kubelet[2202]: E0307 02:03:07.430002 2202 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:03:07.439412 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. 
Mar 7 02:03:07.460920 kubelet[2202]: E0307 02:03:07.458267 2202 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:03:07.472151 kubelet[2202]: E0307 02:03:07.470898 2202 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:07.477280 containerd[1473]: time="2026-03-07T02:03:07.476359058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}" Mar 7 02:03:07.703573 kubelet[2202]: E0307 02:03:07.702326 2202 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:07.712722 containerd[1473]: time="2026-03-07T02:03:07.710696678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c8ff0e882b798fcbcbd96cb3cd12bf87,Namespace:kube-system,Attempt:0,}" Mar 7 02:03:07.748531 kubelet[2202]: E0307 02:03:07.746515 2202 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:07.759056 containerd[1473]: time="2026-03-07T02:03:07.758918013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}" Mar 7 02:03:07.807562 kubelet[2202]: I0307 02:03:07.807527 2202 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:03:07.816112 kubelet[2202]: E0307 02:03:07.816047 2202 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: 
connection refused" node="localhost" Mar 7 02:03:08.151574 kubelet[2202]: E0307 02:03:08.151178 2202 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="1.6s" Mar 7 02:03:08.409662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount500474822.mount: Deactivated successfully. Mar 7 02:03:08.466685 containerd[1473]: time="2026-03-07T02:03:08.465023764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 02:03:08.478503 containerd[1473]: time="2026-03-07T02:03:08.477051255Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 7 02:03:08.484661 containerd[1473]: time="2026-03-07T02:03:08.484287075Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 02:03:08.487613 containerd[1473]: time="2026-03-07T02:03:08.487169509Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 02:03:08.493620 containerd[1473]: time="2026-03-07T02:03:08.491884409Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 02:03:08.495989 containerd[1473]: time="2026-03-07T02:03:08.495797551Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 02:03:08.495989 containerd[1473]: time="2026-03-07T02:03:08.495953139Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 02:03:08.503756 containerd[1473]: time="2026-03-07T02:03:08.503553604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 02:03:08.514346 kubelet[2202]: E0307 02:03:08.514101 2202 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 02:03:08.515916 containerd[1473]: time="2026-03-07T02:03:08.515794534Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.039253863s" Mar 7 02:03:08.519717 containerd[1473]: time="2026-03-07T02:03:08.519343143Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 808.539779ms" Mar 7 02:03:08.537806 containerd[1473]: time="2026-03-07T02:03:08.536994634Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 777.937466ms" Mar 7 02:03:08.620070 kubelet[2202]: I0307 02:03:08.619597 2202 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:03:08.621436 kubelet[2202]: E0307 02:03:08.620210 2202 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Mar 7 02:03:08.942327 containerd[1473]: time="2026-03-07T02:03:08.941449483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:03:08.945774 containerd[1473]: time="2026-03-07T02:03:08.943075376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:03:08.945774 containerd[1473]: time="2026-03-07T02:03:08.943497076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:03:08.945774 containerd[1473]: time="2026-03-07T02:03:08.943729671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:03:08.949649 containerd[1473]: time="2026-03-07T02:03:08.948565854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:03:08.949649 containerd[1473]: time="2026-03-07T02:03:08.948644688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:03:08.949649 containerd[1473]: time="2026-03-07T02:03:08.948668241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:03:08.959903 containerd[1473]: time="2026-03-07T02:03:08.956543559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:03:08.959903 containerd[1473]: time="2026-03-07T02:03:08.956684697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:03:08.959903 containerd[1473]: time="2026-03-07T02:03:08.956910770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:03:08.959903 containerd[1473]: time="2026-03-07T02:03:08.955076530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:03:08.960777 containerd[1473]: time="2026-03-07T02:03:08.960448729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:03:09.038662 systemd[1]: Started cri-containerd-7818b7a06d8591b2fe4f5e8a79c27c1fe239381832afafb627eed6b4f5bee8e4.scope - libcontainer container 7818b7a06d8591b2fe4f5e8a79c27c1fe239381832afafb627eed6b4f5bee8e4. Mar 7 02:03:09.059906 systemd[1]: Started cri-containerd-78c4a43a0266d17715f08bc8525bf0a5bb620b3b6ee1878cd5909bf6485012e7.scope - libcontainer container 78c4a43a0266d17715f08bc8525bf0a5bb620b3b6ee1878cd5909bf6485012e7. Mar 7 02:03:09.068316 systemd[1]: Started cri-containerd-db6230a7d8608cf0c130e4be77a606b669325938677d57dce4df71cbe85ed58c.scope - libcontainer container db6230a7d8608cf0c130e4be77a606b669325938677d57dce4df71cbe85ed58c. 
Mar 7 02:03:09.296191 containerd[1473]: time="2026-03-07T02:03:09.296093364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"db6230a7d8608cf0c130e4be77a606b669325938677d57dce4df71cbe85ed58c\"" Mar 7 02:03:09.302896 kubelet[2202]: E0307 02:03:09.302682 2202 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:09.313808 containerd[1473]: time="2026-03-07T02:03:09.313612362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c8ff0e882b798fcbcbd96cb3cd12bf87,Namespace:kube-system,Attempt:0,} returns sandbox id \"7818b7a06d8591b2fe4f5e8a79c27c1fe239381832afafb627eed6b4f5bee8e4\"" Mar 7 02:03:09.318903 kubelet[2202]: E0307 02:03:09.317965 2202 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:09.351299 containerd[1473]: time="2026-03-07T02:03:09.350582830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"78c4a43a0266d17715f08bc8525bf0a5bb620b3b6ee1878cd5909bf6485012e7\"" Mar 7 02:03:09.354210 containerd[1473]: time="2026-03-07T02:03:09.353482834Z" level=info msg="CreateContainer within sandbox \"db6230a7d8608cf0c130e4be77a606b669325938677d57dce4df71cbe85ed58c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 02:03:09.354339 kubelet[2202]: E0307 02:03:09.353779 2202 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:09.376068 containerd[1473]: 
time="2026-03-07T02:03:09.370215610Z" level=info msg="CreateContainer within sandbox \"7818b7a06d8591b2fe4f5e8a79c27c1fe239381832afafb627eed6b4f5bee8e4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 02:03:09.412954 containerd[1473]: time="2026-03-07T02:03:09.410720341Z" level=info msg="CreateContainer within sandbox \"78c4a43a0266d17715f08bc8525bf0a5bb620b3b6ee1878cd5909bf6485012e7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 02:03:09.474558 containerd[1473]: time="2026-03-07T02:03:09.474342284Z" level=info msg="CreateContainer within sandbox \"db6230a7d8608cf0c130e4be77a606b669325938677d57dce4df71cbe85ed58c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6ada9d705b5a8e453b4a53b4e3f2962ffedf81d0c7c0d822fe01723b13a35584\"" Mar 7 02:03:09.476805 containerd[1473]: time="2026-03-07T02:03:09.476336659Z" level=info msg="StartContainer for \"6ada9d705b5a8e453b4a53b4e3f2962ffedf81d0c7c0d822fe01723b13a35584\"" Mar 7 02:03:09.504154 containerd[1473]: time="2026-03-07T02:03:09.504038533Z" level=info msg="CreateContainer within sandbox \"7818b7a06d8591b2fe4f5e8a79c27c1fe239381832afafb627eed6b4f5bee8e4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ffcbd354bff061dc39d6771c6864378da7435c75dc82d1e063ac4db523ee52bd\"" Mar 7 02:03:09.505228 containerd[1473]: time="2026-03-07T02:03:09.505200528Z" level=info msg="StartContainer for \"ffcbd354bff061dc39d6771c6864378da7435c75dc82d1e063ac4db523ee52bd\"" Mar 7 02:03:09.530413 containerd[1473]: time="2026-03-07T02:03:09.529900920Z" level=info msg="CreateContainer within sandbox \"78c4a43a0266d17715f08bc8525bf0a5bb620b3b6ee1878cd5909bf6485012e7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c5270b8d145edd8054e7ea967b1d6ed69cf2730e8046ced46e8595b92d31c1d7\"" Mar 7 02:03:09.548569 containerd[1473]: time="2026-03-07T02:03:09.546490365Z" level=info msg="StartContainer for 
\"c5270b8d145edd8054e7ea967b1d6ed69cf2730e8046ced46e8595b92d31c1d7\"" Mar 7 02:03:09.598114 systemd[1]: Started cri-containerd-6ada9d705b5a8e453b4a53b4e3f2962ffedf81d0c7c0d822fe01723b13a35584.scope - libcontainer container 6ada9d705b5a8e453b4a53b4e3f2962ffedf81d0c7c0d822fe01723b13a35584. Mar 7 02:03:09.651461 systemd[1]: Started cri-containerd-ffcbd354bff061dc39d6771c6864378da7435c75dc82d1e063ac4db523ee52bd.scope - libcontainer container ffcbd354bff061dc39d6771c6864378da7435c75dc82d1e063ac4db523ee52bd. Mar 7 02:03:09.668925 systemd[1]: Started cri-containerd-c5270b8d145edd8054e7ea967b1d6ed69cf2730e8046ced46e8595b92d31c1d7.scope - libcontainer container c5270b8d145edd8054e7ea967b1d6ed69cf2730e8046ced46e8595b92d31c1d7. Mar 7 02:03:09.753981 kubelet[2202]: E0307 02:03:09.753797 2202 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="3.2s" Mar 7 02:03:09.826777 containerd[1473]: time="2026-03-07T02:03:09.822502721Z" level=info msg="StartContainer for \"6ada9d705b5a8e453b4a53b4e3f2962ffedf81d0c7c0d822fe01723b13a35584\" returns successfully" Mar 7 02:03:09.865500 containerd[1473]: time="2026-03-07T02:03:09.862605895Z" level=info msg="StartContainer for \"ffcbd354bff061dc39d6771c6864378da7435c75dc82d1e063ac4db523ee52bd\" returns successfully" Mar 7 02:03:09.906921 containerd[1473]: time="2026-03-07T02:03:09.906393518Z" level=info msg="StartContainer for \"c5270b8d145edd8054e7ea967b1d6ed69cf2730e8046ced46e8595b92d31c1d7\" returns successfully" Mar 7 02:03:10.097662 kubelet[2202]: E0307 02:03:10.094525 2202 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:03:10.107679 kubelet[2202]: E0307 02:03:10.107125 2202 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:10.112489 kubelet[2202]: E0307 02:03:10.111968 2202 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:03:10.112489 kubelet[2202]: E0307 02:03:10.112179 2202 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:10.122259 kubelet[2202]: E0307 02:03:10.122176 2202 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:03:10.122998 kubelet[2202]: E0307 02:03:10.122569 2202 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:10.228993 kubelet[2202]: I0307 02:03:10.228946 2202 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:03:11.124431 kubelet[2202]: E0307 02:03:11.124270 2202 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:03:11.128546 kubelet[2202]: E0307 02:03:11.124937 2202 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:11.128546 kubelet[2202]: E0307 02:03:11.128271 2202 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:03:11.128546 kubelet[2202]: E0307 02:03:11.128518 2202 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:13.421680 kubelet[2202]: E0307 02:03:13.416994 2202 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 7 02:03:13.721580 kubelet[2202]: I0307 02:03:13.706389 2202 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 7 02:03:13.721580 kubelet[2202]: E0307 02:03:13.709995 2202 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 7 02:03:13.917372 kubelet[2202]: E0307 02:03:13.916956 2202 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:03:14.059349 kubelet[2202]: E0307 02:03:14.024515 2202 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:03:14.447281 kubelet[2202]: E0307 02:03:14.196134 2202 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:03:14.834116 kubelet[2202]: E0307 02:03:14.827461 2202 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:03:15.092352 kubelet[2202]: E0307 02:03:15.087932 2202 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:03:15.092352 kubelet[2202]: I0307 02:03:15.089451 2202 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 02:03:15.516230 kubelet[2202]: I0307 02:03:15.322089 2202 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 02:03:16.120725 kubelet[2202]: I0307 02:03:16.029981 2202 apiserver.go:52] "Watching apiserver" Mar 7 02:03:16.678396 kubelet[2202]: I0307 02:03:16.569673 2202 kubelet.go:3340] "Creating a mirror pod for static 
pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 02:03:17.088629 kubelet[2202]: I0307 02:03:17.087709 2202 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 02:03:17.865729 kubelet[2202]: I0307 02:03:17.861592 2202 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 02:03:17.865729 kubelet[2202]: E0307 02:03:17.874224 2202 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:18.986046 kubelet[2202]: E0307 02:03:18.965700 2202 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 7 02:03:18.986046 kubelet[2202]: I0307 02:03:18.991696 2202 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 02:03:19.559638 kubelet[2202]: E0307 02:03:18.915934 2202 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:21.770107 kubelet[2202]: E0307 02:03:21.769684 2202 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.714s" Mar 7 02:03:22.178937 kubelet[2202]: I0307 02:03:22.177560 2202 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 02:03:22.283264 kubelet[2202]: E0307 02:03:22.283143 2202 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 7 02:03:23.129130 kubelet[2202]: E0307 02:03:23.095416 2202 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.323s" Mar 7 
02:03:24.584893 kubelet[2202]: E0307 02:03:24.531489 2202 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 7 02:03:24.584893 kubelet[2202]: E0307 02:03:24.532221 2202 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:25.392519 kubelet[2202]: E0307 02:03:25.388701 2202 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.437s" Mar 7 02:03:25.571992 kubelet[2202]: E0307 02:03:25.570593 2202 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:25.571992 kubelet[2202]: E0307 02:03:25.571774 2202 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:25.721503 kubelet[2202]: I0307 02:03:25.719345 2202 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=9.719305298 podStartE2EDuration="9.719305298s" podCreationTimestamp="2026-03-07 02:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:03:25.71680808 +0000 UTC m=+20.276090628" watchObservedRunningTime="2026-03-07 02:03:25.719305298 +0000 UTC m=+20.278587805" Mar 7 02:03:25.832938 kubelet[2202]: I0307 02:03:25.826265 2202 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=9.826246787 podStartE2EDuration="9.826246787s" podCreationTimestamp="2026-03-07 02:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:03:25.819236873 +0000 UTC m=+20.378519382" watchObservedRunningTime="2026-03-07 02:03:25.826246787 +0000 UTC m=+20.385529315" Mar 7 02:03:25.912494 kubelet[2202]: I0307 02:03:25.907637 2202 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=9.907617896 podStartE2EDuration="9.907617896s" podCreationTimestamp="2026-03-07 02:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:03:25.907446381 +0000 UTC m=+20.466728930" watchObservedRunningTime="2026-03-07 02:03:25.907617896 +0000 UTC m=+20.466900414" Mar 7 02:03:26.337046 kubelet[2202]: E0307 02:03:26.329721 2202 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:26.337046 kubelet[2202]: E0307 02:03:26.330323 2202 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:34.364388 systemd[1]: Reloading requested from client PID 2498 ('systemctl') (unit session-5.scope)... Mar 7 02:03:34.364448 systemd[1]: Reloading... Mar 7 02:03:37.406004 zram_generator::config[2534]: No configuration found. Mar 7 02:03:37.993623 kubelet[2202]: E0307 02:03:37.992209 2202 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:39.274214 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 7 02:03:39.873459 systemd[1]: Reloading finished in 5508 ms. Mar 7 02:03:40.163649 kubelet[2202]: E0307 02:03:40.159223 2202 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:40.516919 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:03:40.583776 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 02:03:40.584329 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:03:40.584398 systemd[1]: kubelet.service: Consumed 8.463s CPU time, 133.8M memory peak, 0B memory swap peak. Mar 7 02:03:40.632456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:03:42.028379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:03:42.068336 (kubelet)[2582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 02:03:42.981010 kubelet[2582]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 02:03:43.035784 kubelet[2582]: I0307 02:03:43.032474 2582 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 7 02:03:43.036271 kubelet[2582]: I0307 02:03:43.036246 2582 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 02:03:43.036441 kubelet[2582]: I0307 02:03:43.036424 2582 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 02:03:43.036532 kubelet[2582]: I0307 02:03:43.036516 2582 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 7 02:03:43.038900 kubelet[2582]: I0307 02:03:43.037224 2582 server.go:951] "Client rotation is on, will bootstrap in background" Mar 7 02:03:43.043943 kubelet[2582]: I0307 02:03:43.041262 2582 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 02:03:43.050272 kubelet[2582]: I0307 02:03:43.050175 2582 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 02:03:43.183292 kubelet[2582]: E0307 02:03:43.183247 2582 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 02:03:43.183693 kubelet[2582]: I0307 02:03:43.183633 2582 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 02:03:43.230787 kubelet[2582]: I0307 02:03:43.230633 2582 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 02:03:43.237922 kubelet[2582]: I0307 02:03:43.231673 2582 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 02:03:43.237922 kubelet[2582]: I0307 02:03:43.231773 2582 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 02:03:43.237922 kubelet[2582]: I0307 02:03:43.232203 2582 topology_manager.go:143] "Creating topology manager with none policy" Mar 7 02:03:43.237922 
kubelet[2582]: I0307 02:03:43.232219 2582 container_manager_linux.go:308] "Creating device plugin manager" Mar 7 02:03:43.238396 kubelet[2582]: I0307 02:03:43.232256 2582 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 02:03:43.238396 kubelet[2582]: I0307 02:03:43.232531 2582 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 7 02:03:43.238396 kubelet[2582]: I0307 02:03:43.233403 2582 kubelet.go:482] "Attempting to sync node with API server" Mar 7 02:03:43.238396 kubelet[2582]: I0307 02:03:43.233421 2582 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 02:03:43.238396 kubelet[2582]: I0307 02:03:43.233444 2582 kubelet.go:394] "Adding apiserver pod source" Mar 7 02:03:43.238396 kubelet[2582]: I0307 02:03:43.233459 2582 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 02:03:43.247393 kubelet[2582]: I0307 02:03:43.245954 2582 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 02:03:43.256663 kubelet[2582]: I0307 02:03:43.250776 2582 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 02:03:43.256663 kubelet[2582]: I0307 02:03:43.251047 2582 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 02:03:43.407914 kubelet[2582]: I0307 02:03:43.406614 2582 server.go:1257] "Started kubelet" Mar 7 02:03:43.407914 kubelet[2582]: I0307 02:03:43.408021 2582 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 02:03:43.420240 kubelet[2582]: I0307 02:03:43.410780 2582 server.go:317] "Adding debug handlers to kubelet server" Mar 7 02:03:43.420240 kubelet[2582]: I0307 02:03:43.418597 2582 ratelimit.go:56] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Mar 7 02:03:43.420240 kubelet[2582]: I0307 02:03:43.418748 2582 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 02:03:43.420240 kubelet[2582]: I0307 02:03:43.419483 2582 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 02:03:43.464747 kubelet[2582]: I0307 02:03:43.463501 2582 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 7 02:03:43.491071 kubelet[2582]: I0307 02:03:43.487230 2582 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 02:03:43.588978 kubelet[2582]: I0307 02:03:43.579641 2582 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 7 02:03:43.588978 kubelet[2582]: I0307 02:03:43.580230 2582 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 02:03:43.588978 kubelet[2582]: I0307 02:03:43.580463 2582 reconciler.go:29] "Reconciler: start to sync state" Mar 7 02:03:43.648972 kubelet[2582]: I0307 02:03:43.620135 2582 factory.go:223] Registration of the systemd container factory successfully Mar 7 02:03:43.749017 kubelet[2582]: I0307 02:03:43.732273 2582 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 02:03:43.812949 kubelet[2582]: I0307 02:03:43.798136 2582 factory.go:223] Registration of the containerd container factory successfully Mar 7 02:03:43.844510 kubelet[2582]: I0307 02:03:43.844361 2582 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 02:03:43.856699 kubelet[2582]: I0307 02:03:43.856452 2582 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 7 02:03:43.856699 kubelet[2582]: I0307 02:03:43.856479 2582 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 7 02:03:43.856699 kubelet[2582]: I0307 02:03:43.856513 2582 kubelet.go:2501] "Starting kubelet main sync loop" Mar 7 02:03:43.856699 kubelet[2582]: E0307 02:03:43.856657 2582 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 02:03:43.960059 kubelet[2582]: E0307 02:03:43.958052 2582 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 02:03:44.341281 kubelet[2582]: E0307 02:03:44.228276 2582 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 02:03:44.341281 kubelet[2582]: I0307 02:03:44.295627 2582 apiserver.go:52] "Watching apiserver" Mar 7 02:03:44.633669 kubelet[2582]: E0307 02:03:44.630906 2582 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 02:03:44.649562 kubelet[2582]: I0307 02:03:44.649207 2582 cpu_manager.go:225] "Starting" policy="none" Mar 7 02:03:44.649562 kubelet[2582]: I0307 02:03:44.649260 2582 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 7 02:03:44.649562 kubelet[2582]: I0307 02:03:44.649291 2582 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 7 02:03:44.649788 kubelet[2582]: I0307 02:03:44.649624 2582 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 7 02:03:44.649788 kubelet[2582]: I0307 02:03:44.649642 2582 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 7 02:03:44.649788 kubelet[2582]: I0307 02:03:44.649665 2582 policy_none.go:50] "Start" Mar 7 02:03:44.649788 kubelet[2582]: 
I0307 02:03:44.649675 2582 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 02:03:44.649788 kubelet[2582]: I0307 02:03:44.649688 2582 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 02:03:44.650167 kubelet[2582]: I0307 02:03:44.650003 2582 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 7 02:03:44.650167 kubelet[2582]: I0307 02:03:44.650019 2582 policy_none.go:44] "Start" Mar 7 02:03:44.907997 kubelet[2582]: E0307 02:03:44.905360 2582 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 02:03:44.912739 kubelet[2582]: I0307 02:03:44.909944 2582 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 7 02:03:44.912739 kubelet[2582]: I0307 02:03:44.909965 2582 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 02:03:44.915701 kubelet[2582]: I0307 02:03:44.915277 2582 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 7 02:03:44.924195 kubelet[2582]: I0307 02:03:44.917615 2582 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 02:03:44.945882 containerd[1473]: time="2026-03-07T02:03:44.940755101Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 02:03:44.947703 kubelet[2582]: I0307 02:03:44.943233 2582 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 02:03:44.947703 kubelet[2582]: E0307 02:03:44.945914 2582 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 7 02:03:45.529093 kubelet[2582]: I0307 02:03:45.528574 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8ff0e882b798fcbcbd96cb3cd12bf87-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c8ff0e882b798fcbcbd96cb3cd12bf87\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:03:45.529093 kubelet[2582]: I0307 02:03:45.528638 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8ff0e882b798fcbcbd96cb3cd12bf87-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c8ff0e882b798fcbcbd96cb3cd12bf87\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:03:45.529093 kubelet[2582]: I0307 02:03:45.528738 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:03:45.529093 kubelet[2582]: I0307 02:03:45.528769 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:03:45.529093 kubelet[2582]: I0307 02:03:45.528792 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 7 02:03:45.549307 kubelet[2582]: I0307 02:03:45.529144 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:03:45.549307 kubelet[2582]: I0307 02:03:45.529313 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 7 02:03:45.549307 kubelet[2582]: I0307 02:03:45.529716 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8ff0e882b798fcbcbd96cb3cd12bf87-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c8ff0e882b798fcbcbd96cb3cd12bf87\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:03:45.549307 kubelet[2582]: I0307 02:03:45.530163 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:03:45.549307 kubelet[2582]: I0307 02:03:45.533283 2582 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 02:03:45.549307 kubelet[2582]: I0307 02:03:45.541453 2582 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:03:45.571284 systemd[1]: Created slice 
kubepods-besteffort-pod7103d79c_c240_440d_88f7_5dc887211255.slice - libcontainer container kubepods-besteffort-pod7103d79c_c240_440d_88f7_5dc887211255.slice. Mar 7 02:03:45.584911 kubelet[2582]: I0307 02:03:45.581203 2582 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 02:03:45.626688 kubelet[2582]: I0307 02:03:45.624659 2582 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Mar 7 02:03:45.626688 kubelet[2582]: I0307 02:03:45.624802 2582 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 7 02:03:45.635985 kubelet[2582]: E0307 02:03:45.625667 2582 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 7 02:03:45.635985 kubelet[2582]: I0307 02:03:45.631267 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7103d79c-c240-440d-88f7-5dc887211255-xtables-lock\") pod \"kube-proxy-djl57\" (UID: \"7103d79c-c240-440d-88f7-5dc887211255\") " pod="kube-system/kube-proxy-djl57" Mar 7 02:03:45.635985 kubelet[2582]: I0307 02:03:45.631344 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7103d79c-c240-440d-88f7-5dc887211255-lib-modules\") pod \"kube-proxy-djl57\" (UID: \"7103d79c-c240-440d-88f7-5dc887211255\") " pod="kube-system/kube-proxy-djl57" Mar 7 02:03:45.635985 kubelet[2582]: I0307 02:03:45.631763 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7103d79c-c240-440d-88f7-5dc887211255-kube-proxy\") pod \"kube-proxy-djl57\" (UID: \"7103d79c-c240-440d-88f7-5dc887211255\") " pod="kube-system/kube-proxy-djl57" Mar 7 02:03:45.635985 kubelet[2582]: I0307 02:03:45.631911 2582 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrprr\" (UniqueName: \"kubernetes.io/projected/7103d79c-c240-440d-88f7-5dc887211255-kube-api-access-zrprr\") pod \"kube-proxy-djl57\" (UID: \"7103d79c-c240-440d-88f7-5dc887211255\") " pod="kube-system/kube-proxy-djl57" Mar 7 02:03:45.860984 kubelet[2582]: E0307 02:03:45.838955 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:46.059067 kubelet[2582]: E0307 02:03:46.055552 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:46.064541 kubelet[2582]: E0307 02:03:46.062116 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:46.978721 kubelet[2582]: E0307 02:03:46.976285 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:47.031783 kubelet[2582]: E0307 02:03:46.982090 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:47.031783 kubelet[2582]: E0307 02:03:46.983108 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:47.031783 kubelet[2582]: E0307 02:03:47.029790 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 
02:03:47.044095 containerd[1473]: time="2026-03-07T02:03:47.041776850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-djl57,Uid:7103d79c-c240-440d-88f7-5dc887211255,Namespace:kube-system,Attempt:0,}" Mar 7 02:03:47.814120 containerd[1473]: time="2026-03-07T02:03:47.789393943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:03:47.824263 containerd[1473]: time="2026-03-07T02:03:47.819359218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:03:47.828730 containerd[1473]: time="2026-03-07T02:03:47.828165957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:03:47.862655 containerd[1473]: time="2026-03-07T02:03:47.858072844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:03:47.986774 kubelet[2582]: E0307 02:03:47.986736 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:47.989737 kubelet[2582]: E0307 02:03:47.987365 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:48.607588 systemd[1]: Started cri-containerd-935bc2cbcfc2449615ec381a47acdc3fba7af185ec7bf7814d841ba9fed2db0b.scope - libcontainer container 935bc2cbcfc2449615ec381a47acdc3fba7af185ec7bf7814d841ba9fed2db0b. 
Mar 7 02:03:50.172174 containerd[1473]: time="2026-03-07T02:03:50.171965972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-djl57,Uid:7103d79c-c240-440d-88f7-5dc887211255,Namespace:kube-system,Attempt:0,} returns sandbox id \"935bc2cbcfc2449615ec381a47acdc3fba7af185ec7bf7814d841ba9fed2db0b\"" Mar 7 02:03:50.174907 kubelet[2582]: E0307 02:03:50.173773 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:50.563401 containerd[1473]: time="2026-03-07T02:03:50.563338887Z" level=info msg="CreateContainer within sandbox \"935bc2cbcfc2449615ec381a47acdc3fba7af185ec7bf7814d841ba9fed2db0b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 02:03:50.693982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount693877135.mount: Deactivated successfully. Mar 7 02:03:50.709156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1683047772.mount: Deactivated successfully. Mar 7 02:03:50.753991 containerd[1473]: time="2026-03-07T02:03:50.752223258Z" level=info msg="CreateContainer within sandbox \"935bc2cbcfc2449615ec381a47acdc3fba7af185ec7bf7814d841ba9fed2db0b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"07a2cb6ab86157d1f6af7069228e1c0a4f358a05c7e247cb7d09bc0a0042d454\"" Mar 7 02:03:50.759702 containerd[1473]: time="2026-03-07T02:03:50.758058753Z" level=info msg="StartContainer for \"07a2cb6ab86157d1f6af7069228e1c0a4f358a05c7e247cb7d09bc0a0042d454\"" Mar 7 02:03:50.985648 systemd[1]: Started cri-containerd-07a2cb6ab86157d1f6af7069228e1c0a4f358a05c7e247cb7d09bc0a0042d454.scope - libcontainer container 07a2cb6ab86157d1f6af7069228e1c0a4f358a05c7e247cb7d09bc0a0042d454. 
Mar 7 02:03:51.285086 containerd[1473]: time="2026-03-07T02:03:51.282339002Z" level=info msg="StartContainer for \"07a2cb6ab86157d1f6af7069228e1c0a4f358a05c7e247cb7d09bc0a0042d454\" returns successfully" Mar 7 02:03:52.309949 kubelet[2582]: E0307 02:03:52.309362 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:53.324172 kubelet[2582]: E0307 02:03:53.324127 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:55.176782 kubelet[2582]: I0307 02:03:55.176294 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-djl57" podStartSLOduration=12.175799734 podStartE2EDuration="12.175799734s" podCreationTimestamp="2026-03-07 02:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:03:52.444724726 +0000 UTC m=+10.293711884" watchObservedRunningTime="2026-03-07 02:03:55.175799734 +0000 UTC m=+13.024786932" Mar 7 02:03:55.225480 kubelet[2582]: I0307 02:03:55.225361 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58ac9d2b-6858-4fa0-8f8c-8d9f07fd5343-xtables-lock\") pod \"kube-flannel-ds-rxfpk\" (UID: \"58ac9d2b-6858-4fa0-8f8c-8d9f07fd5343\") " pod="kube-flannel/kube-flannel-ds-rxfpk" Mar 7 02:03:55.225480 kubelet[2582]: I0307 02:03:55.225473 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/58ac9d2b-6858-4fa0-8f8c-8d9f07fd5343-cni\") pod \"kube-flannel-ds-rxfpk\" (UID: \"58ac9d2b-6858-4fa0-8f8c-8d9f07fd5343\") " pod="kube-flannel/kube-flannel-ds-rxfpk" Mar 7 
02:03:55.225480 kubelet[2582]: I0307 02:03:55.225498 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/58ac9d2b-6858-4fa0-8f8c-8d9f07fd5343-flannel-cfg\") pod \"kube-flannel-ds-rxfpk\" (UID: \"58ac9d2b-6858-4fa0-8f8c-8d9f07fd5343\") " pod="kube-flannel/kube-flannel-ds-rxfpk" Mar 7 02:03:55.225480 kubelet[2582]: I0307 02:03:55.225517 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58q2b\" (UniqueName: \"kubernetes.io/projected/58ac9d2b-6858-4fa0-8f8c-8d9f07fd5343-kube-api-access-58q2b\") pod \"kube-flannel-ds-rxfpk\" (UID: \"58ac9d2b-6858-4fa0-8f8c-8d9f07fd5343\") " pod="kube-flannel/kube-flannel-ds-rxfpk" Mar 7 02:03:55.225480 kubelet[2582]: I0307 02:03:55.225589 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/58ac9d2b-6858-4fa0-8f8c-8d9f07fd5343-run\") pod \"kube-flannel-ds-rxfpk\" (UID: \"58ac9d2b-6858-4fa0-8f8c-8d9f07fd5343\") " pod="kube-flannel/kube-flannel-ds-rxfpk" Mar 7 02:03:55.230706 kubelet[2582]: I0307 02:03:55.225612 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/58ac9d2b-6858-4fa0-8f8c-8d9f07fd5343-cni-plugin\") pod \"kube-flannel-ds-rxfpk\" (UID: \"58ac9d2b-6858-4fa0-8f8c-8d9f07fd5343\") " pod="kube-flannel/kube-flannel-ds-rxfpk" Mar 7 02:03:55.318620 systemd[1]: Created slice kubepods-burstable-pod58ac9d2b_6858_4fa0_8f8c_8d9f07fd5343.slice - libcontainer container kubepods-burstable-pod58ac9d2b_6858_4fa0_8f8c_8d9f07fd5343.slice. 
Mar 7 02:03:56.039342 kubelet[2582]: E0307 02:03:56.038314 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:56.183613 containerd[1473]: time="2026-03-07T02:03:56.173081403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-rxfpk,Uid:58ac9d2b-6858-4fa0-8f8c-8d9f07fd5343,Namespace:kube-flannel,Attempt:0,}" Mar 7 02:03:56.687723 containerd[1473]: time="2026-03-07T02:03:56.686272391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:03:56.687723 containerd[1473]: time="2026-03-07T02:03:56.686409866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:03:56.687723 containerd[1473]: time="2026-03-07T02:03:56.686429242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:03:56.687723 containerd[1473]: time="2026-03-07T02:03:56.686563351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:03:56.717195 sudo[1601]: pam_unix(sudo:session): session closed for user root Mar 7 02:03:56.745706 sshd[1598]: pam_unix(sshd:session): session closed for user core Mar 7 02:03:56.783297 systemd[1]: run-containerd-runc-k8s.io-5225557e12fce8104cc664e2c35469b2cd2779b57c89ffa24f307648f9f4412e-runc.2T0mgd.mount: Deactivated successfully. Mar 7 02:03:56.800073 systemd[1]: sshd@4-10.0.0.144:22-10.0.0.1:46402.service: Deactivated successfully. Mar 7 02:03:56.807630 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 02:03:56.808149 systemd[1]: session-5.scope: Consumed 11.935s CPU time, 163.0M memory peak, 0B memory swap peak. 
Mar 7 02:03:56.819414 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit. Mar 7 02:03:56.858227 systemd[1]: Started cri-containerd-5225557e12fce8104cc664e2c35469b2cd2779b57c89ffa24f307648f9f4412e.scope - libcontainer container 5225557e12fce8104cc664e2c35469b2cd2779b57c89ffa24f307648f9f4412e. Mar 7 02:03:56.877608 systemd-logind[1453]: Removed session 5. Mar 7 02:03:57.097520 containerd[1473]: time="2026-03-07T02:03:57.097319384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-rxfpk,Uid:58ac9d2b-6858-4fa0-8f8c-8d9f07fd5343,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"5225557e12fce8104cc664e2c35469b2cd2779b57c89ffa24f307648f9f4412e\"" Mar 7 02:03:57.107048 kubelet[2582]: E0307 02:03:57.102805 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:57.114981 containerd[1473]: time="2026-03-07T02:03:57.110546266Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Mar 7 02:03:59.782602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2481156513.mount: Deactivated successfully. 
Mar 7 02:04:00.204513 containerd[1473]: time="2026-03-07T02:04:00.202102279Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:04:00.220542 containerd[1473]: time="2026-03-07T02:04:00.217991274Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Mar 7 02:04:00.234536 containerd[1473]: time="2026-03-07T02:04:00.233154675Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:04:00.247350 containerd[1473]: time="2026-03-07T02:04:00.244421536Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:04:00.247350 containerd[1473]: time="2026-03-07T02:04:00.246514956Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 3.135911996s" Mar 7 02:04:00.247350 containerd[1473]: time="2026-03-07T02:04:00.246560890Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Mar 7 02:04:00.286053 containerd[1473]: time="2026-03-07T02:04:00.285651640Z" level=info msg="CreateContainer within sandbox \"5225557e12fce8104cc664e2c35469b2cd2779b57c89ffa24f307648f9f4412e\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" 
Mar 7 02:04:00.385174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2621509945.mount: Deactivated successfully. Mar 7 02:04:00.433576 containerd[1473]: time="2026-03-07T02:04:00.426274716Z" level=info msg="CreateContainer within sandbox \"5225557e12fce8104cc664e2c35469b2cd2779b57c89ffa24f307648f9f4412e\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"c900c46f26572e86f0c93405c25251564e438c3965badce8ade30a4158191f04\"" Mar 7 02:04:00.433576 containerd[1473]: time="2026-03-07T02:04:00.430689034Z" level=info msg="StartContainer for \"c900c46f26572e86f0c93405c25251564e438c3965badce8ade30a4158191f04\"" Mar 7 02:04:00.866534 systemd[1]: Started cri-containerd-c900c46f26572e86f0c93405c25251564e438c3965badce8ade30a4158191f04.scope - libcontainer container c900c46f26572e86f0c93405c25251564e438c3965badce8ade30a4158191f04. Mar 7 02:04:01.322699 containerd[1473]: time="2026-03-07T02:04:01.318945979Z" level=info msg="StartContainer for \"c900c46f26572e86f0c93405c25251564e438c3965badce8ade30a4158191f04\" returns successfully" Mar 7 02:04:01.329695 systemd[1]: cri-containerd-c900c46f26572e86f0c93405c25251564e438c3965badce8ade30a4158191f04.scope: Deactivated successfully. Mar 7 02:04:01.947451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c900c46f26572e86f0c93405c25251564e438c3965badce8ade30a4158191f04-rootfs.mount: Deactivated successfully. 
Mar 7 02:04:01.963479 kubelet[2582]: E0307 02:04:01.962583 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:02.098663 containerd[1473]: time="2026-03-07T02:04:02.098591931Z" level=info msg="shim disconnected" id=c900c46f26572e86f0c93405c25251564e438c3965badce8ade30a4158191f04 namespace=k8s.io Mar 7 02:04:02.099217 containerd[1473]: time="2026-03-07T02:04:02.099069896Z" level=warning msg="cleaning up after shim disconnected" id=c900c46f26572e86f0c93405c25251564e438c3965badce8ade30a4158191f04 namespace=k8s.io Mar 7 02:04:02.111628 containerd[1473]: time="2026-03-07T02:04:02.111578872Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 02:04:02.984976 kubelet[2582]: E0307 02:04:02.983709 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:03.007170 containerd[1473]: time="2026-03-07T02:04:03.001220435Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Mar 7 02:04:21.754177 containerd[1473]: time="2026-03-07T02:04:21.750676866Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:04:21.767012 containerd[1473]: time="2026-03-07T02:04:21.766757041Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Mar 7 02:04:21.783936 containerd[1473]: time="2026-03-07T02:04:21.782301612Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:04:21.813602 containerd[1473]: time="2026-03-07T02:04:21.811595600Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:04:21.819922 containerd[1473]: time="2026-03-07T02:04:21.818451585Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 18.817178631s" Mar 7 02:04:21.819922 containerd[1473]: time="2026-03-07T02:04:21.818489204Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Mar 7 02:04:21.855241 containerd[1473]: time="2026-03-07T02:04:21.852753307Z" level=info msg="CreateContainer within sandbox \"5225557e12fce8104cc664e2c35469b2cd2779b57c89ffa24f307648f9f4412e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 7 02:04:21.948556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1426694220.mount: Deactivated successfully. Mar 7 02:04:22.013681 containerd[1473]: time="2026-03-07T02:04:22.010083404Z" level=info msg="CreateContainer within sandbox \"5225557e12fce8104cc664e2c35469b2cd2779b57c89ffa24f307648f9f4412e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c7b2924142cdc0b27eb5980e749e033b41b57ca1e2febd65d2eab8f9fecfb0a6\"" Mar 7 02:04:22.020041 containerd[1473]: time="2026-03-07T02:04:22.017735111Z" level=info msg="StartContainer for \"c7b2924142cdc0b27eb5980e749e033b41b57ca1e2febd65d2eab8f9fecfb0a6\"" Mar 7 02:04:22.613369 systemd[1]: Started cri-containerd-c7b2924142cdc0b27eb5980e749e033b41b57ca1e2febd65d2eab8f9fecfb0a6.scope - libcontainer container c7b2924142cdc0b27eb5980e749e033b41b57ca1e2febd65d2eab8f9fecfb0a6. 
Mar 7 02:04:22.772778 systemd[1]: cri-containerd-c7b2924142cdc0b27eb5980e749e033b41b57ca1e2febd65d2eab8f9fecfb0a6.scope: Deactivated successfully. Mar 7 02:04:22.777262 kubelet[2582]: I0307 02:04:22.776384 2582 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 7 02:04:22.788466 containerd[1473]: time="2026-03-07T02:04:22.785254064Z" level=info msg="StartContainer for \"c7b2924142cdc0b27eb5980e749e033b41b57ca1e2febd65d2eab8f9fecfb0a6\" returns successfully" Mar 7 02:04:22.945306 systemd[1]: Created slice kubepods-burstable-pod75886309_34ac_4a36_bdd3_040d6e77c36d.slice - libcontainer container kubepods-burstable-pod75886309_34ac_4a36_bdd3_040d6e77c36d.slice. Mar 7 02:04:22.969548 systemd[1]: Created slice kubepods-burstable-podce585a00_9440_48b2_b957_1f74eb5ad7a3.slice - libcontainer container kubepods-burstable-podce585a00_9440_48b2_b957_1f74eb5ad7a3.slice. Mar 7 02:04:23.022297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7b2924142cdc0b27eb5980e749e033b41b57ca1e2febd65d2eab8f9fecfb0a6-rootfs.mount: Deactivated successfully. 
Mar 7 02:04:23.062686 kubelet[2582]: I0307 02:04:23.062185 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75886309-34ac-4a36-bdd3-040d6e77c36d-config-volume\") pod \"coredns-7d764666f9-kfstk\" (UID: \"75886309-34ac-4a36-bdd3-040d6e77c36d\") " pod="kube-system/coredns-7d764666f9-kfstk" Mar 7 02:04:23.062686 kubelet[2582]: I0307 02:04:23.062298 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdz5w\" (UniqueName: \"kubernetes.io/projected/ce585a00-9440-48b2-b957-1f74eb5ad7a3-kube-api-access-pdz5w\") pod \"coredns-7d764666f9-fdtcb\" (UID: \"ce585a00-9440-48b2-b957-1f74eb5ad7a3\") " pod="kube-system/coredns-7d764666f9-fdtcb" Mar 7 02:04:23.062686 kubelet[2582]: I0307 02:04:23.062335 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66qwk\" (UniqueName: \"kubernetes.io/projected/75886309-34ac-4a36-bdd3-040d6e77c36d-kube-api-access-66qwk\") pod \"coredns-7d764666f9-kfstk\" (UID: \"75886309-34ac-4a36-bdd3-040d6e77c36d\") " pod="kube-system/coredns-7d764666f9-kfstk" Mar 7 02:04:23.062686 kubelet[2582]: I0307 02:04:23.062364 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce585a00-9440-48b2-b957-1f74eb5ad7a3-config-volume\") pod \"coredns-7d764666f9-fdtcb\" (UID: \"ce585a00-9440-48b2-b957-1f74eb5ad7a3\") " pod="kube-system/coredns-7d764666f9-fdtcb" Mar 7 02:04:23.224184 containerd[1473]: time="2026-03-07T02:04:23.223922482Z" level=info msg="shim disconnected" id=c7b2924142cdc0b27eb5980e749e033b41b57ca1e2febd65d2eab8f9fecfb0a6 namespace=k8s.io Mar 7 02:04:23.224184 containerd[1473]: time="2026-03-07T02:04:23.224031354Z" level=warning msg="cleaning up after shim disconnected" 
id=c7b2924142cdc0b27eb5980e749e033b41b57ca1e2febd65d2eab8f9fecfb0a6 namespace=k8s.io Mar 7 02:04:23.224184 containerd[1473]: time="2026-03-07T02:04:23.224046692Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 02:04:23.264223 kubelet[2582]: E0307 02:04:23.264065 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:23.271383 containerd[1473]: time="2026-03-07T02:04:23.269706449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-kfstk,Uid:75886309-34ac-4a36-bdd3-040d6e77c36d,Namespace:kube-system,Attempt:0,}" Mar 7 02:04:23.307023 kubelet[2582]: E0307 02:04:23.304145 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:23.307559 containerd[1473]: time="2026-03-07T02:04:23.307517270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-fdtcb,Uid:ce585a00-9440-48b2-b957-1f74eb5ad7a3,Namespace:kube-system,Attempt:0,}" Mar 7 02:04:23.521974 containerd[1473]: time="2026-03-07T02:04:23.521628041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-kfstk,Uid:75886309-34ac-4a36-bdd3-040d6e77c36d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d1119720ead9bbc264ce019adff07dfc412563ba4b6c8df2f3e762e1c6dfb2d6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 7 02:04:23.525620 kubelet[2582]: E0307 02:04:23.525570 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1119720ead9bbc264ce019adff07dfc412563ba4b6c8df2f3e762e1c6dfb2d6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open 
/run/flannel/subnet.env: no such file or directory" Mar 7 02:04:23.528988 kubelet[2582]: E0307 02:04:23.528957 2582 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1119720ead9bbc264ce019adff07dfc412563ba4b6c8df2f3e762e1c6dfb2d6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-kfstk" Mar 7 02:04:23.529252 kubelet[2582]: E0307 02:04:23.529219 2582 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1119720ead9bbc264ce019adff07dfc412563ba4b6c8df2f3e762e1c6dfb2d6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-kfstk" Mar 7 02:04:23.529494 kubelet[2582]: E0307 02:04:23.529454 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-kfstk_kube-system(75886309-34ac-4a36-bdd3-040d6e77c36d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-kfstk_kube-system(75886309-34ac-4a36-bdd3-040d6e77c36d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1119720ead9bbc264ce019adff07dfc412563ba4b6c8df2f3e762e1c6dfb2d6\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7d764666f9-kfstk" podUID="75886309-34ac-4a36-bdd3-040d6e77c36d" Mar 7 02:04:23.540291 containerd[1473]: time="2026-03-07T02:04:23.539725302Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-fdtcb,Uid:ce585a00-9440-48b2-b957-1f74eb5ad7a3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"e4d7f2ef7f18d9e732268a995c11d8cf29008054654cd1f30f0bb9dd87e6870a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 7 02:04:23.540450 kubelet[2582]: E0307 02:04:23.540374 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d7f2ef7f18d9e732268a995c11d8cf29008054654cd1f30f0bb9dd87e6870a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 7 02:04:23.540520 kubelet[2582]: E0307 02:04:23.540438 2582 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d7f2ef7f18d9e732268a995c11d8cf29008054654cd1f30f0bb9dd87e6870a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-fdtcb" Mar 7 02:04:23.540520 kubelet[2582]: E0307 02:04:23.540474 2582 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d7f2ef7f18d9e732268a995c11d8cf29008054654cd1f30f0bb9dd87e6870a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-fdtcb" Mar 7 02:04:23.541048 kubelet[2582]: E0307 02:04:23.540529 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-fdtcb_kube-system(ce585a00-9440-48b2-b957-1f74eb5ad7a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-fdtcb_kube-system(ce585a00-9440-48b2-b957-1f74eb5ad7a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4d7f2ef7f18d9e732268a995c11d8cf29008054654cd1f30f0bb9dd87e6870a\\\": plugin 
type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7d764666f9-fdtcb" podUID="ce585a00-9440-48b2-b957-1f74eb5ad7a3" Mar 7 02:04:23.726389 kubelet[2582]: E0307 02:04:23.725043 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:23.760406 containerd[1473]: time="2026-03-07T02:04:23.760202850Z" level=info msg="CreateContainer within sandbox \"5225557e12fce8104cc664e2c35469b2cd2779b57c89ffa24f307648f9f4412e\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Mar 7 02:04:23.843279 containerd[1473]: time="2026-03-07T02:04:23.841999471Z" level=info msg="CreateContainer within sandbox \"5225557e12fce8104cc664e2c35469b2cd2779b57c89ffa24f307648f9f4412e\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"77b4035da3ec4e7596224c5710380f76f5bbbee34425a54e06f4c0700495b678\"" Mar 7 02:04:23.845254 containerd[1473]: time="2026-03-07T02:04:23.843404738Z" level=info msg="StartContainer for \"77b4035da3ec4e7596224c5710380f76f5bbbee34425a54e06f4c0700495b678\"" Mar 7 02:04:23.991451 systemd[1]: Started cri-containerd-77b4035da3ec4e7596224c5710380f76f5bbbee34425a54e06f4c0700495b678.scope - libcontainer container 77b4035da3ec4e7596224c5710380f76f5bbbee34425a54e06f4c0700495b678. Mar 7 02:04:24.027481 systemd[1]: run-netns-cni\x2d29acdcbb\x2d962e\x2d6149\x2d26c2\x2dc37df26a2768.mount: Deactivated successfully. Mar 7 02:04:24.028209 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d1119720ead9bbc264ce019adff07dfc412563ba4b6c8df2f3e762e1c6dfb2d6-shm.mount: Deactivated successfully. 
Mar 7 02:04:24.120019 containerd[1473]: time="2026-03-07T02:04:24.117972723Z" level=info msg="StartContainer for \"77b4035da3ec4e7596224c5710380f76f5bbbee34425a54e06f4c0700495b678\" returns successfully" Mar 7 02:04:24.747931 kubelet[2582]: E0307 02:04:24.747640 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:24.794947 kubelet[2582]: I0307 02:04:24.789758 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-rxfpk" podStartSLOduration=3.158072619 podStartE2EDuration="29.789740185s" podCreationTimestamp="2026-03-07 02:03:55 +0000 UTC" firstStartedPulling="2026-03-07 02:03:57.110015883 +0000 UTC m=+14.959003042" lastFinishedPulling="2026-03-07 02:04:23.741683449 +0000 UTC m=+41.590670608" observedRunningTime="2026-03-07 02:04:24.785565398 +0000 UTC m=+42.634552566" watchObservedRunningTime="2026-03-07 02:04:24.789740185 +0000 UTC m=+42.638727353" Mar 7 02:04:25.599400 systemd-networkd[1378]: flannel.1: Link UP Mar 7 02:04:25.599418 systemd-networkd[1378]: flannel.1: Gained carrier Mar 7 02:04:25.782572 kubelet[2582]: E0307 02:04:25.777031 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:27.373554 systemd-networkd[1378]: flannel.1: Gained IPv6LL Mar 7 02:04:34.884781 kubelet[2582]: E0307 02:04:34.884051 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:34.892216 containerd[1473]: time="2026-03-07T02:04:34.892058147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-fdtcb,Uid:ce585a00-9440-48b2-b957-1f74eb5ad7a3,Namespace:kube-system,Attempt:0,}" Mar 7 02:04:34.915288 
kubelet[2582]: E0307 02:04:34.910538 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:04:34.924708 containerd[1473]: time="2026-03-07T02:04:34.918889045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-kfstk,Uid:75886309-34ac-4a36-bdd3-040d6e77c36d,Namespace:kube-system,Attempt:0,}"
Mar 7 02:04:35.188416 systemd-networkd[1378]: cni0: Link UP
Mar 7 02:04:35.188431 systemd-networkd[1378]: cni0: Gained carrier
Mar 7 02:04:35.229167 systemd-networkd[1378]: cni0: Lost carrier
Mar 7 02:04:35.342561 systemd-networkd[1378]: vethe3922dcb: Link UP
Mar 7 02:04:35.367557 kernel: cni0: port 1(vethe3922dcb) entered blocking state
Mar 7 02:04:35.367722 kernel: cni0: port 1(vethe3922dcb) entered disabled state
Mar 7 02:04:35.367757 kernel: vethe3922dcb: entered allmulticast mode
Mar 7 02:04:35.390523 kernel: vethe3922dcb: entered promiscuous mode
Mar 7 02:04:35.400889 kernel: cni0: port 1(vethe3922dcb) entered blocking state
Mar 7 02:04:35.434417 kernel: cni0: port 1(vethe3922dcb) entered forwarding state
Mar 7 02:04:35.464579 kernel: cni0: port 1(vethe3922dcb) entered disabled state
Mar 7 02:04:35.464743 kernel: cni0: port 2(veth54abf110) entered blocking state
Mar 7 02:04:35.505284 kernel: cni0: port 2(veth54abf110) entered disabled state
Mar 7 02:04:35.505449 kernel: veth54abf110: entered allmulticast mode
Mar 7 02:04:35.528552 kernel: veth54abf110: entered promiscuous mode
Mar 7 02:04:35.547130 kernel: cni0: port 2(veth54abf110) entered blocking state
Mar 7 02:04:35.547288 kernel: cni0: port 2(veth54abf110) entered forwarding state
Mar 7 02:04:35.567520 systemd-networkd[1378]: veth54abf110: Link UP
Mar 7 02:04:35.594340 kernel: cni0: port 2(veth54abf110) entered disabled state
Mar 7 02:04:35.658924 kernel: cni0: port 1(vethe3922dcb) entered blocking state
Mar 7 02:04:35.659296 kernel: cni0: port 1(vethe3922dcb) entered forwarding state
Mar 7 02:04:35.661564 systemd-networkd[1378]: vethe3922dcb: Gained carrier
Mar 7 02:04:35.664326 systemd-networkd[1378]: cni0: Gained carrier
Mar 7 02:04:35.669445 kernel: cni0: port 2(veth54abf110) entered blocking state
Mar 7 02:04:35.670632 kernel: cni0: port 2(veth54abf110) entered forwarding state
Mar 7 02:04:35.684346 systemd-networkd[1378]: veth54abf110: Gained carrier
Mar 7 02:04:35.686125 containerd[1473]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000012840), "name":"cbr0", "type":"bridge"}
Mar 7 02:04:35.686125 containerd[1473]: delegateAdd: netconf sent to delegate plugin:
Mar 7 02:04:35.731799 containerd[1473]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Mar 7 02:04:35.731799 containerd[1473]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a950), "name":"cbr0", "type":"bridge"}
Mar 7 02:04:35.731799 containerd[1473]: delegateAdd: netconf sent to delegate plugin:
Mar 7 02:04:35.902174 containerd[1473]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-03-07T02:04:35.901714246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 02:04:35.902174 containerd[1473]: time="2026-03-07T02:04:35.901787452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 02:04:35.902174 containerd[1473]: time="2026-03-07T02:04:35.901805024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 02:04:35.909285 containerd[1473]: time="2026-03-07T02:04:35.907948830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 02:04:35.909285 containerd[1473]: time="2026-03-07T02:04:35.908075606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 02:04:35.909285 containerd[1473]: time="2026-03-07T02:04:35.908114859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 02:04:35.909285 containerd[1473]: time="2026-03-07T02:04:35.908255168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 02:04:35.909285 containerd[1473]: time="2026-03-07T02:04:35.905380943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 02:04:36.039495 systemd[1]: Started cri-containerd-6f294cf3590e23a26e4199164b0f6cc20319e6296f32083862a17044984d01a1.scope - libcontainer container 6f294cf3590e23a26e4199164b0f6cc20319e6296f32083862a17044984d01a1.
Mar 7 02:04:36.054657 systemd[1]: Started cri-containerd-8532ec99ab79426f2c30ffa889b5ae0346e0887ac25aad0e583b07a425cf1fba.scope - libcontainer container 8532ec99ab79426f2c30ffa889b5ae0346e0887ac25aad0e583b07a425cf1fba.
Mar 7 02:04:36.167116 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 7 02:04:36.168942 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 7 02:04:36.322263 containerd[1473]: time="2026-03-07T02:04:36.320342117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-fdtcb,Uid:ce585a00-9440-48b2-b957-1f74eb5ad7a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f294cf3590e23a26e4199164b0f6cc20319e6296f32083862a17044984d01a1\""
Mar 7 02:04:36.323728 kubelet[2582]: E0307 02:04:36.323635 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:04:36.350435 containerd[1473]: time="2026-03-07T02:04:36.350146790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-kfstk,Uid:75886309-34ac-4a36-bdd3-040d6e77c36d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8532ec99ab79426f2c30ffa889b5ae0346e0887ac25aad0e583b07a425cf1fba\""
Mar 7 02:04:36.350435 containerd[1473]: time="2026-03-07T02:04:36.350327666Z" level=info msg="CreateContainer within sandbox \"6f294cf3590e23a26e4199164b0f6cc20319e6296f32083862a17044984d01a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 7 02:04:36.352213 kubelet[2582]: E0307 02:04:36.351933 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:04:36.366116 containerd[1473]: time="2026-03-07T02:04:36.365744449Z" level=info msg="CreateContainer within sandbox \"8532ec99ab79426f2c30ffa889b5ae0346e0887ac25aad0e583b07a425cf1fba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 7 02:04:36.480388 containerd[1473]: time="2026-03-07T02:04:36.480331099Z" level=info msg="CreateContainer within sandbox \"6f294cf3590e23a26e4199164b0f6cc20319e6296f32083862a17044984d01a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aa36e9f92648fc3f42e022de03ac2145dcfb593327ed984003a8dc1aab37e67b\""
Mar 7 02:04:36.487097 containerd[1473]: time="2026-03-07T02:04:36.486310928Z" level=info msg="StartContainer for \"aa36e9f92648fc3f42e022de03ac2145dcfb593327ed984003a8dc1aab37e67b\""
Mar 7 02:04:36.518514 containerd[1473]: time="2026-03-07T02:04:36.518451239Z" level=info msg="CreateContainer within sandbox \"8532ec99ab79426f2c30ffa889b5ae0346e0887ac25aad0e583b07a425cf1fba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f4b43ab1331c3d41ed1f65fcba828c3ff69cf1e44d95ae0abb310c3f5ae7b0d\""
Mar 7 02:04:36.534940 containerd[1473]: time="2026-03-07T02:04:36.534120386Z" level=info msg="StartContainer for \"2f4b43ab1331c3d41ed1f65fcba828c3ff69cf1e44d95ae0abb310c3f5ae7b0d\""
Mar 7 02:04:36.615391 systemd[1]: Started cri-containerd-aa36e9f92648fc3f42e022de03ac2145dcfb593327ed984003a8dc1aab37e67b.scope - libcontainer container aa36e9f92648fc3f42e022de03ac2145dcfb593327ed984003a8dc1aab37e67b.
Mar 7 02:04:36.768347 systemd[1]: Started cri-containerd-2f4b43ab1331c3d41ed1f65fcba828c3ff69cf1e44d95ae0abb310c3f5ae7b0d.scope - libcontainer container 2f4b43ab1331c3d41ed1f65fcba828c3ff69cf1e44d95ae0abb310c3f5ae7b0d.
Mar 7 02:04:36.916714 systemd-networkd[1378]: vethe3922dcb: Gained IPv6LL
Mar 7 02:04:36.985943 containerd[1473]: time="2026-03-07T02:04:36.983940974Z" level=info msg="StartContainer for \"aa36e9f92648fc3f42e022de03ac2145dcfb593327ed984003a8dc1aab37e67b\" returns successfully"
Mar 7 02:04:37.056254 kubelet[2582]: E0307 02:04:37.051672 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:04:37.091034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount905912248.mount: Deactivated successfully.
Mar 7 02:04:37.163237 systemd-networkd[1378]: cni0: Gained IPv6LL
Mar 7 02:04:37.174955 containerd[1473]: time="2026-03-07T02:04:37.174219973Z" level=info msg="StartContainer for \"2f4b43ab1331c3d41ed1f65fcba828c3ff69cf1e44d95ae0abb310c3f5ae7b0d\" returns successfully"
Mar 7 02:04:37.203320 kubelet[2582]: I0307 02:04:37.201424 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-fdtcb" podStartSLOduration=54.201401673 podStartE2EDuration="54.201401673s" podCreationTimestamp="2026-03-07 02:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:04:37.187634516 +0000 UTC m=+55.036621694" watchObservedRunningTime="2026-03-07 02:04:37.201401673 +0000 UTC m=+55.050388912"
Mar 7 02:04:37.547779 systemd-networkd[1378]: veth54abf110: Gained IPv6LL
Mar 7 02:04:38.127405 kubelet[2582]: E0307 02:04:38.125088 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:04:38.127405 kubelet[2582]: E0307 02:04:38.125763 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:04:38.289114 kubelet[2582]: I0307 02:04:38.284773 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-kfstk" podStartSLOduration=55.284753768 podStartE2EDuration="55.284753768s" podCreationTimestamp="2026-03-07 02:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:04:38.191458578 +0000 UTC m=+56.040445786" watchObservedRunningTime="2026-03-07 02:04:38.284753768 +0000 UTC m=+56.133740925"
Mar 7 02:04:39.129254 kubelet[2582]: E0307 02:04:39.129138 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:04:39.129933 kubelet[2582]: E0307 02:04:39.129737 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:04:40.135724 kubelet[2582]: E0307 02:04:40.135460 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:04:49.886721 kubelet[2582]: E0307 02:04:49.878229 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:04:55.873651 kubelet[2582]: E0307 02:04:55.863144 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:04:56.862317 kubelet[2582]: E0307 02:04:56.858553 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:04:57.861224 kubelet[2582]: E0307 02:04:57.859205 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:25.673243 systemd[1]: Started sshd@5-10.0.0.144:22-10.0.0.1:48640.service - OpenSSH per-connection server daemon (10.0.0.1:48640).
Mar 7 02:05:25.892510 sshd[3719]: Accepted publickey for core from 10.0.0.1 port 48640 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:05:25.899244 sshd[3719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:05:25.929227 systemd-logind[1453]: New session 6 of user core.
Mar 7 02:05:25.946015 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 7 02:05:26.417985 sshd[3719]: pam_unix(sshd:session): session closed for user core
Mar 7 02:05:26.429014 systemd[1]: sshd@5-10.0.0.144:22-10.0.0.1:48640.service: Deactivated successfully.
Mar 7 02:05:26.433522 systemd[1]: session-6.scope: Deactivated successfully.
Mar 7 02:05:26.437542 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit.
Mar 7 02:05:26.445557 systemd-logind[1453]: Removed session 6.
Mar 7 02:05:31.466042 systemd[1]: Started sshd@6-10.0.0.144:22-10.0.0.1:32934.service - OpenSSH per-connection server daemon (10.0.0.1:32934).
Mar 7 02:05:31.546265 sshd[3757]: Accepted publickey for core from 10.0.0.1 port 32934 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:05:31.559543 sshd[3757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:05:31.583000 systemd-logind[1453]: New session 7 of user core.
Mar 7 02:05:31.602017 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 7 02:05:31.971095 sshd[3757]: pam_unix(sshd:session): session closed for user core
Mar 7 02:05:31.982422 systemd[1]: sshd@6-10.0.0.144:22-10.0.0.1:32934.service: Deactivated successfully.
Mar 7 02:05:31.986125 systemd[1]: session-7.scope: Deactivated successfully.
Mar 7 02:05:31.990296 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit.
Mar 7 02:05:31.995784 systemd-logind[1453]: Removed session 7.
Mar 7 02:05:37.018344 systemd[1]: Started sshd@7-10.0.0.144:22-10.0.0.1:32942.service - OpenSSH per-connection server daemon (10.0.0.1:32942).
Mar 7 02:05:37.121488 sshd[3792]: Accepted publickey for core from 10.0.0.1 port 32942 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:05:37.126506 sshd[3792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:05:37.165478 systemd-logind[1453]: New session 8 of user core.
Mar 7 02:05:37.192655 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 7 02:05:37.582037 sshd[3792]: pam_unix(sshd:session): session closed for user core
Mar 7 02:05:37.598317 systemd[1]: sshd@7-10.0.0.144:22-10.0.0.1:32942.service: Deactivated successfully.
Mar 7 02:05:37.606694 systemd[1]: session-8.scope: Deactivated successfully.
Mar 7 02:05:37.630123 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit.
Mar 7 02:05:37.635671 systemd-logind[1453]: Removed session 8.
Mar 7 02:05:42.623473 systemd[1]: Started sshd@8-10.0.0.144:22-10.0.0.1:42202.service - OpenSSH per-connection server daemon (10.0.0.1:42202).
Mar 7 02:05:42.687359 sshd[3833]: Accepted publickey for core from 10.0.0.1 port 42202 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:05:42.691645 sshd[3833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:05:42.710379 systemd-logind[1453]: New session 9 of user core.
Mar 7 02:05:42.713634 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 7 02:05:42.914378 sshd[3833]: pam_unix(sshd:session): session closed for user core
Mar 7 02:05:42.928628 systemd[1]: sshd@8-10.0.0.144:22-10.0.0.1:42202.service: Deactivated successfully.
Mar 7 02:05:42.932950 systemd[1]: session-9.scope: Deactivated successfully.
Mar 7 02:05:42.937265 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit.
Mar 7 02:05:42.942686 systemd[1]: Started sshd@9-10.0.0.144:22-10.0.0.1:42218.service - OpenSSH per-connection server daemon (10.0.0.1:42218).
Mar 7 02:05:42.946376 systemd-logind[1453]: Removed session 9.
Mar 7 02:05:43.023351 sshd[3848]: Accepted publickey for core from 10.0.0.1 port 42218 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:05:43.025092 sshd[3848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:05:43.039991 systemd-logind[1453]: New session 10 of user core.
Mar 7 02:05:43.045261 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 7 02:05:43.289516 sshd[3848]: pam_unix(sshd:session): session closed for user core
Mar 7 02:05:43.308905 systemd[1]: sshd@9-10.0.0.144:22-10.0.0.1:42218.service: Deactivated successfully.
Mar 7 02:05:43.312126 systemd[1]: session-10.scope: Deactivated successfully.
Mar 7 02:05:43.318251 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit.
Mar 7 02:05:43.332644 systemd[1]: Started sshd@10-10.0.0.144:22-10.0.0.1:42234.service - OpenSSH per-connection server daemon (10.0.0.1:42234).
Mar 7 02:05:43.340617 systemd-logind[1453]: Removed session 10.
Mar 7 02:05:43.392340 sshd[3875]: Accepted publickey for core from 10.0.0.1 port 42234 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:05:43.396049 sshd[3875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:05:43.411972 systemd-logind[1453]: New session 11 of user core.
Mar 7 02:05:43.425408 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 02:05:43.648277 sshd[3875]: pam_unix(sshd:session): session closed for user core
Mar 7 02:05:43.659404 systemd[1]: sshd@10-10.0.0.144:22-10.0.0.1:42234.service: Deactivated successfully.
Mar 7 02:05:43.665449 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 02:05:43.667199 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit.
Mar 7 02:05:43.674419 systemd-logind[1453]: Removed session 11.
Mar 7 02:05:48.673055 systemd[1]: Started sshd@11-10.0.0.144:22-10.0.0.1:42236.service - OpenSSH per-connection server daemon (10.0.0.1:42236).
Mar 7 02:05:48.740629 sshd[3911]: Accepted publickey for core from 10.0.0.1 port 42236 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:05:48.743428 sshd[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:05:48.758360 systemd-logind[1453]: New session 12 of user core.
Mar 7 02:05:48.768275 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 02:05:49.006429 sshd[3911]: pam_unix(sshd:session): session closed for user core
Mar 7 02:05:49.028354 systemd[1]: sshd@11-10.0.0.144:22-10.0.0.1:42236.service: Deactivated successfully.
Mar 7 02:05:49.038739 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 02:05:49.042452 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit.
Mar 7 02:05:49.048196 systemd-logind[1453]: Removed session 12.
Mar 7 02:05:53.859153 kubelet[2582]: E0307 02:05:53.859110 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:54.053512 systemd[1]: Started sshd@12-10.0.0.144:22-10.0.0.1:52108.service - OpenSSH per-connection server daemon (10.0.0.1:52108).
Mar 7 02:05:54.166341 sshd[3952]: Accepted publickey for core from 10.0.0.1 port 52108 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:05:54.164508 sshd[3952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:05:54.187424 systemd-logind[1453]: New session 13 of user core.
Mar 7 02:05:54.200374 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 02:05:54.427990 sshd[3952]: pam_unix(sshd:session): session closed for user core
Mar 7 02:05:54.434714 systemd[1]: sshd@12-10.0.0.144:22-10.0.0.1:52108.service: Deactivated successfully.
Mar 7 02:05:54.438372 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 02:05:54.442760 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit.
Mar 7 02:05:54.445676 systemd-logind[1453]: Removed session 13.
Mar 7 02:05:55.859037 kubelet[2582]: E0307 02:05:55.858373 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:58.874166 kubelet[2582]: E0307 02:05:58.872075 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:59.532636 systemd[1]: Started sshd@13-10.0.0.144:22-10.0.0.1:52114.service - OpenSSH per-connection server daemon (10.0.0.1:52114).
Mar 7 02:05:59.854398 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 52114 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:05:59.870594 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:05:59.939990 systemd-logind[1453]: New session 14 of user core.
Mar 7 02:06:00.022576 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 02:06:00.916799 kubelet[2582]: E0307 02:06:00.895686 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:06:01.979077 sshd[3987]: pam_unix(sshd:session): session closed for user core
Mar 7 02:06:02.126018 systemd[1]: sshd@13-10.0.0.144:22-10.0.0.1:52114.service: Deactivated successfully.
Mar 7 02:06:02.179727 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 02:06:02.180359 systemd[1]: session-14.scope: Consumed 1.875s CPU time.
Mar 7 02:06:02.218679 systemd-logind[1453]: Session 14 logged out. Waiting for processes to exit.
Mar 7 02:06:02.239233 systemd-logind[1453]: Removed session 14.
Mar 7 02:06:06.868616 kubelet[2582]: E0307 02:06:06.868291 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:06:07.290093 systemd[1]: Started sshd@14-10.0.0.144:22-10.0.0.1:58866.service - OpenSSH per-connection server daemon (10.0.0.1:58866).
Mar 7 02:06:08.174269 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 58866 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:06:08.247370 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:06:08.491475 systemd-logind[1453]: New session 15 of user core.
Mar 7 02:06:08.552123 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 02:06:11.150761 sshd[4023]: pam_unix(sshd:session): session closed for user core
Mar 7 02:06:11.338371 systemd[1]: sshd@14-10.0.0.144:22-10.0.0.1:58866.service: Deactivated successfully.
Mar 7 02:06:11.439964 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 02:06:11.440729 systemd[1]: session-15.scope: Consumed 1.658s CPU time.
Mar 7 02:06:11.541215 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit.
Mar 7 02:06:11.576294 systemd-logind[1453]: Removed session 15.
Mar 7 02:06:11.984178 kubelet[2582]: E0307 02:06:11.983140 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:06:11.987710 kubelet[2582]: E0307 02:06:11.985339 2582 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:06:16.462508 systemd[1]: Started sshd@15-10.0.0.144:22-10.0.0.1:56636.service - OpenSSH per-connection server daemon (10.0.0.1:56636).
Mar 7 02:06:17.035689 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 56636 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:06:17.091775 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:06:17.274359 systemd-logind[1453]: New session 16 of user core.
Mar 7 02:06:17.289142 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 02:06:19.770576 sshd[4078]: pam_unix(sshd:session): session closed for user core
Mar 7 02:06:19.868711 systemd[1]: sshd@15-10.0.0.144:22-10.0.0.1:56636.service: Deactivated successfully.
Mar 7 02:06:19.887715 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 02:06:19.895375 systemd[1]: session-16.scope: Consumed 1.486s CPU time.
Mar 7 02:06:19.941691 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit.
Mar 7 02:06:19.963282 systemd-logind[1453]: Removed session 16.
Mar 7 02:06:24.770524 systemd[1]: Started sshd@16-10.0.0.144:22-10.0.0.1:39882.service - OpenSSH per-connection server daemon (10.0.0.1:39882).
Mar 7 02:06:24.823670 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 39882 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:06:24.826734 sshd[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:06:24.836208 systemd-logind[1453]: New session 17 of user core.
Mar 7 02:06:24.847322 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 02:06:24.999633 sshd[4126]: pam_unix(sshd:session): session closed for user core
Mar 7 02:06:25.009309 systemd[1]: sshd@16-10.0.0.144:22-10.0.0.1:39882.service: Deactivated successfully.
Mar 7 02:06:25.011376 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 02:06:25.013583 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit.
Mar 7 02:06:25.021804 systemd[1]: Started sshd@17-10.0.0.144:22-10.0.0.1:39886.service - OpenSSH per-connection server daemon (10.0.0.1:39886).
Mar 7 02:06:25.024033 systemd-logind[1453]: Removed session 17.
Mar 7 02:06:25.076255 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 39886 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:06:25.078248 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:06:25.085937 systemd-logind[1453]: New session 18 of user core.
Mar 7 02:06:25.091539 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 02:06:25.441595 sshd[4141]: pam_unix(sshd:session): session closed for user core
Mar 7 02:06:25.462670 systemd[1]: sshd@17-10.0.0.144:22-10.0.0.1:39886.service: Deactivated successfully.
Mar 7 02:06:25.466270 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 02:06:25.470314 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit.
Mar 7 02:06:25.483621 systemd[1]: Started sshd@18-10.0.0.144:22-10.0.0.1:39894.service - OpenSSH per-connection server daemon (10.0.0.1:39894).
Mar 7 02:06:25.486237 systemd-logind[1453]: Removed session 18.
Mar 7 02:06:25.530655 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 39894 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:06:25.533276 sshd[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:06:25.542582 systemd-logind[1453]: New session 19 of user core.
Mar 7 02:06:25.550391 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 02:06:26.308140 sshd[4155]: pam_unix(sshd:session): session closed for user core
Mar 7 02:06:26.320231 systemd[1]: sshd@18-10.0.0.144:22-10.0.0.1:39894.service: Deactivated successfully.
Mar 7 02:06:26.323591 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 02:06:26.331246 systemd-logind[1453]: Session 19 logged out. Waiting for processes to exit.
Mar 7 02:06:26.338476 systemd[1]: Started sshd@19-10.0.0.144:22-10.0.0.1:39904.service - OpenSSH per-connection server daemon (10.0.0.1:39904).
Mar 7 02:06:26.345414 systemd-logind[1453]: Removed session 19.
Mar 7 02:06:26.409409 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 39904 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:06:26.412351 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:06:26.419691 systemd-logind[1453]: New session 20 of user core.
Mar 7 02:06:26.429436 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 02:06:26.759527 sshd[4176]: pam_unix(sshd:session): session closed for user core
Mar 7 02:06:26.767768 systemd[1]: sshd@19-10.0.0.144:22-10.0.0.1:39904.service: Deactivated successfully.
Mar 7 02:06:26.770340 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 02:06:26.773977 systemd-logind[1453]: Session 20 logged out. Waiting for processes to exit.
Mar 7 02:06:26.787511 systemd[1]: Started sshd@20-10.0.0.144:22-10.0.0.1:39906.service - OpenSSH per-connection server daemon (10.0.0.1:39906).
Mar 7 02:06:26.789420 systemd-logind[1453]: Removed session 20.
Mar 7 02:06:26.832575 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 39906 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:06:26.835293 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:06:26.844564 systemd-logind[1453]: New session 21 of user core.
Mar 7 02:06:26.858350 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 02:06:27.045220 sshd[4203]: pam_unix(sshd:session): session closed for user core
Mar 7 02:06:27.050758 systemd[1]: sshd@20-10.0.0.144:22-10.0.0.1:39906.service: Deactivated successfully.
Mar 7 02:06:27.053795 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 02:06:27.055711 systemd-logind[1453]: Session 21 logged out. Waiting for processes to exit.
Mar 7 02:06:27.058292 systemd-logind[1453]: Removed session 21.
Mar 7 02:06:32.100565 systemd[1]: Started sshd@21-10.0.0.144:22-10.0.0.1:50712.service - OpenSSH per-connection server daemon (10.0.0.1:50712).
Mar 7 02:06:32.165193 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 50712 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:06:32.169481 sshd[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:06:32.201365 systemd-logind[1453]: New session 22 of user core.
Mar 7 02:06:32.212108 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 02:06:32.483807 sshd[4240]: pam_unix(sshd:session): session closed for user core
Mar 7 02:06:32.489672 systemd[1]: sshd@21-10.0.0.144:22-10.0.0.1:50712.service: Deactivated successfully.
Mar 7 02:06:32.493800 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 02:06:32.498322 systemd-logind[1453]: Session 22 logged out. Waiting for processes to exit.
Mar 7 02:06:32.504640 systemd-logind[1453]: Removed session 22.
Mar 7 02:06:37.536437 systemd[1]: Started sshd@22-10.0.0.144:22-10.0.0.1:50738.service - OpenSSH per-connection server daemon (10.0.0.1:50738).
Mar 7 02:06:37.656723 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 50738 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:06:37.658460 sshd[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:06:37.683307 systemd-logind[1453]: New session 23 of user core.
Mar 7 02:06:37.691636 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 7 02:06:37.991007 sshd[4277]: pam_unix(sshd:session): session closed for user core
Mar 7 02:06:38.004456 systemd[1]: sshd@22-10.0.0.144:22-10.0.0.1:50738.service: Deactivated successfully.
Mar 7 02:06:38.011465 systemd[1]: session-23.scope: Deactivated successfully.
Mar 7 02:06:38.017613 systemd-logind[1453]: Session 23 logged out. Waiting for processes to exit.
Mar 7 02:06:38.025941 systemd-logind[1453]: Removed session 23.
Mar 7 02:06:43.052264 systemd[1]: Started sshd@23-10.0.0.144:22-10.0.0.1:52840.service - OpenSSH per-connection server daemon (10.0.0.1:52840).
Mar 7 02:06:43.184119 sshd[4311]: Accepted publickey for core from 10.0.0.1 port 52840 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:06:43.188719 sshd[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:06:43.204714 systemd-logind[1453]: New session 24 of user core.
Mar 7 02:06:43.221293 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 7 02:06:43.511228 sshd[4311]: pam_unix(sshd:session): session closed for user core
Mar 7 02:06:43.525274 systemd[1]: sshd@23-10.0.0.144:22-10.0.0.1:52840.service: Deactivated successfully.
Mar 7 02:06:43.530682 systemd[1]: session-24.scope: Deactivated successfully.
Mar 7 02:06:43.540329 systemd-logind[1453]: Session 24 logged out. Waiting for processes to exit.
Mar 7 02:06:43.545480 systemd-logind[1453]: Removed session 24.
Mar 7 02:06:48.580383 systemd[1]: Started sshd@24-10.0.0.144:22-10.0.0.1:52852.service - OpenSSH per-connection server daemon (10.0.0.1:52852).
Mar 7 02:06:48.676361 sshd[4347]: Accepted publickey for core from 10.0.0.1 port 52852 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:06:48.679098 sshd[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:06:48.734331 systemd-logind[1453]: New session 25 of user core.
Mar 7 02:06:48.766368 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 7 02:06:49.023775 sshd[4347]: pam_unix(sshd:session): session closed for user core
Mar 7 02:06:49.032804 systemd[1]: sshd@24-10.0.0.144:22-10.0.0.1:52852.service: Deactivated successfully.
Mar 7 02:06:49.037695 systemd[1]: session-25.scope: Deactivated successfully.
Mar 7 02:06:49.040312 systemd-logind[1453]: Session 25 logged out. Waiting for processes to exit.
Mar 7 02:06:49.043995 systemd-logind[1453]: Removed session 25.