Mar 7 01:46:41.770301 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:46:41.770404 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:46:41.770430 kernel: BIOS-provided physical RAM map:
Mar 7 01:46:41.770439 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 7 01:46:41.770449 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 7 01:46:41.770458 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 7 01:46:41.770469 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 7 01:46:41.770478 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 7 01:46:41.770488 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 7 01:46:41.770504 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 7 01:46:41.770514 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 7 01:46:41.770524 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 7 01:46:41.770534 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 7 01:46:41.770544 kernel: NX (Execute Disable) protection: active
Mar 7 01:46:41.770556 kernel: APIC: Static calls initialized
Mar 7 01:46:41.770582 kernel: SMBIOS 2.8 present.
Mar 7 01:46:41.770594 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 7 01:46:41.770607 kernel: Hypervisor detected: KVM
Mar 7 01:46:41.770616 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:46:41.770626 kernel: kvm-clock: using sched offset of 16354770703 cycles
Mar 7 01:46:41.770637 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:46:41.770648 kernel: tsc: Detected 2445.426 MHz processor
Mar 7 01:46:41.770658 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:46:41.770669 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:46:41.770684 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 7 01:46:41.770696 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 7 01:46:41.770707 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:46:41.770719 kernel: Using GB pages for direct mapping
Mar 7 01:46:41.770730 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:46:41.770741 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 7 01:46:41.770751 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:46:41.770762 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:46:41.770773 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:46:41.770791 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 7 01:46:41.770802 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:46:41.770813 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:46:41.770824 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:46:41.770835 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001)
Mar 7 01:46:41.770961 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 7 01:46:41.770972 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 7 01:46:41.770989 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 7 01:46:41.771004 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 7 01:46:41.771014 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 7 01:46:41.771026 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 7 01:46:41.771038 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 7 01:46:41.771109 kernel: No NUMA configuration found
Mar 7 01:46:41.771125 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 7 01:46:41.771144 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 7 01:46:41.771155 kernel: Zone ranges:
Mar 7 01:46:41.771166 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:46:41.771179 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 7 01:46:41.771190 kernel: Normal empty
Mar 7 01:46:41.771202 kernel: Movable zone start for each node
Mar 7 01:46:41.771213 kernel: Early memory node ranges
Mar 7 01:46:41.771223 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 7 01:46:41.771234 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 7 01:46:41.771249 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 7 01:46:41.771258 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:46:41.771268 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 7 01:46:41.771277 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 7 01:46:41.771287 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 7 01:46:41.771296 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:46:41.771306 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:46:41.771315 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 7 01:46:41.771326 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:46:41.771341 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:46:41.771352 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:46:41.771362 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:46:41.771373 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:46:41.771382 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 7 01:46:41.771392 kernel: TSC deadline timer available
Mar 7 01:46:41.771401 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 7 01:46:41.771411 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 01:46:41.771420 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 7 01:46:41.771433 kernel: kvm-guest: setup PV sched yield
Mar 7 01:46:41.771443 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 7 01:46:41.771453 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:46:41.771465 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:46:41.771477 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 7 01:46:41.771488 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 7 01:46:41.771500 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 7 01:46:41.771511 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 7 01:46:41.771521 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:46:41.771536 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:46:41.771549 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:46:41.771560 kernel: random: crng init done
Mar 7 01:46:41.771571 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:46:41.771582 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:46:41.771593 kernel: Fallback order for Node 0: 0
Mar 7 01:46:41.771603 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 7 01:46:41.771614 kernel: Policy zone: DMA32
Mar 7 01:46:41.771628 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:46:41.771639 kernel: Memory: 2434604K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136888K reserved, 0K cma-reserved)
Mar 7 01:46:41.771650 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 7 01:46:41.771661 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:46:41.771672 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:46:41.771682 kernel: Dynamic Preempt: voluntary
Mar 7 01:46:41.771693 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:46:41.771704 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:46:41.771714 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 7 01:46:41.771725 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:46:41.771740 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:46:41.771751 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:46:41.771762 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:46:41.771772 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 7 01:46:41.771782 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 7 01:46:41.771793 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:46:41.771803 kernel: Console: colour VGA+ 80x25
Mar 7 01:46:41.771813 kernel: printk: console [ttyS0] enabled
Mar 7 01:46:41.771823 kernel: ACPI: Core revision 20230628
Mar 7 01:46:41.771837 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 7 01:46:41.771932 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:46:41.771943 kernel: x2apic enabled
Mar 7 01:46:41.771954 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:46:41.771964 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 7 01:46:41.771975 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 7 01:46:41.771985 kernel: kvm-guest: setup PV IPIs
Mar 7 01:46:41.771996 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 7 01:46:41.772022 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 7 01:46:41.772033 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 7 01:46:41.772044 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 7 01:46:41.772103 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 7 01:46:41.772114 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 7 01:46:41.772125 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:46:41.772135 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:46:41.772146 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:46:41.772161 kernel: Speculative Store Bypass: Vulnerable
Mar 7 01:46:41.772171 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 7 01:46:41.772183 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 7 01:46:41.772194 kernel: active return thunk: srso_alias_return_thunk
Mar 7 01:46:41.772204 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 7 01:46:41.772215 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 7 01:46:41.772226 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:46:41.772236 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:46:41.772251 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:46:41.772261 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:46:41.772272 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:46:41.772283 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 7 01:46:41.772294 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:46:41.772305 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:46:41.772317 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:46:41.772328 kernel: landlock: Up and running.
Mar 7 01:46:41.772338 kernel: SELinux: Initializing.
Mar 7 01:46:41.772353 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:46:41.772364 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:46:41.772374 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 7 01:46:41.772385 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:46:41.772396 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:46:41.772407 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:46:41.772418 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 7 01:46:41.772428 kernel: signal: max sigframe size: 1776
Mar 7 01:46:41.772439 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:46:41.772454 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:46:41.772465 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:46:41.772475 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:46:41.772486 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:46:41.772496 kernel: .... node #0, CPUs: #1 #2 #3
Mar 7 01:46:41.772507 kernel: smp: Brought up 1 node, 4 CPUs
Mar 7 01:46:41.772518 kernel: smpboot: Max logical packages: 1
Mar 7 01:46:41.772529 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 7 01:46:41.772539 kernel: devtmpfs: initialized
Mar 7 01:46:41.772554 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:46:41.772566 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:46:41.772576 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 7 01:46:41.772589 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:46:41.772603 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:46:41.772614 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:46:41.772628 kernel: audit: type=2000 audit(1772847992.639:1): state=initialized audit_enabled=0 res=1
Mar 7 01:46:41.772640 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:46:41.772651 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:46:41.772667 kernel: cpuidle: using governor menu
Mar 7 01:46:41.772678 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:46:41.772689 kernel: dca service started, version 1.12.1
Mar 7 01:46:41.772701 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 7 01:46:41.772713 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 7 01:46:41.772724 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:46:41.772735 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:46:41.772747 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:46:41.772759 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:46:41.772778 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:46:41.772789 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:46:41.772799 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:46:41.772811 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:46:41.772822 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:46:41.772833 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:46:41.772938 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:46:41.772952 kernel: ACPI: Interpreter enabled
Mar 7 01:46:41.772964 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 7 01:46:41.772980 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:46:41.772993 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:46:41.773005 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 01:46:41.773016 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 7 01:46:41.773027 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:46:41.811345 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:46:41.812284 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 7 01:46:41.812597 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 7 01:46:41.812636 kernel: PCI host bridge to bus 0000:00
Mar 7 01:46:41.813652 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:46:41.813969 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:46:41.816319 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:46:41.816572 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 7 01:46:41.816763 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 7 01:46:41.820005 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 7 01:46:41.822620 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:46:41.824752 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 7 01:46:41.825116 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 7 01:46:41.825402 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 7 01:46:41.825689 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 7 01:46:41.826021 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 7 01:46:41.831314 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:46:41.831607 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 7 01:46:41.831808 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 7 01:46:41.834674 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 7 01:46:41.837275 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 7 01:46:41.837590 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 7 01:46:41.837823 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 7 01:46:41.838565 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 7 01:46:41.838795 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 7 01:46:41.841657 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 7 01:46:41.842131 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 7 01:46:41.842421 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 7 01:46:41.842656 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 7 01:46:41.842996 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 7 01:46:41.850510 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 7 01:46:41.850753 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 7 01:46:41.851153 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 16601 usecs
Mar 7 01:46:41.851606 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 7 01:46:41.851987 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 7 01:46:41.854787 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 7 01:46:41.855304 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 7 01:46:41.855551 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 7 01:46:41.855571 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:46:41.855583 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:46:41.855596 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:46:41.855607 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:46:41.855619 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 7 01:46:41.855630 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 7 01:46:41.855641 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 7 01:46:41.855660 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 7 01:46:41.855672 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 7 01:46:41.855683 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 7 01:46:41.855695 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 7 01:46:41.855706 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 7 01:46:41.855718 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 7 01:46:41.855730 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 7 01:46:41.855741 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 7 01:46:41.855752 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 7 01:46:41.855767 kernel: iommu: Default domain type: Translated
Mar 7 01:46:41.855778 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:46:41.855789 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:46:41.855800 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:46:41.855810 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 7 01:46:41.855823 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 7 01:46:41.856324 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 7 01:46:41.856541 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 7 01:46:41.856765 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:46:41.856784 kernel: vgaarb: loaded
Mar 7 01:46:41.856796 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 7 01:46:41.856808 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 7 01:46:41.856820 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:46:41.856831 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:46:41.856942 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:46:41.856960 kernel: pnp: PnP ACPI init
Mar 7 01:46:41.857401 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 7 01:46:41.857432 kernel: pnp: PnP ACPI: found 6 devices
Mar 7 01:46:41.857445 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:46:41.857457 kernel: NET: Registered PF_INET protocol family
Mar 7 01:46:41.857468 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:46:41.857481 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 01:46:41.857492 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:46:41.857504 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:46:41.857516 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 01:46:41.857533 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 01:46:41.857545 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:46:41.857556 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:46:41.857568 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:46:41.857579 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:46:41.857936 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:46:41.858394 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:46:41.858651 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:46:41.858962 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 7 01:46:41.859245 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 7 01:46:41.859453 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 7 01:46:41.859474 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:46:41.859487 kernel: Initialise system trusted keyrings
Mar 7 01:46:41.859500 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 01:46:41.859511 kernel: Key type asymmetric registered
Mar 7 01:46:41.859524 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:46:41.859535 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:46:41.859548 kernel: io scheduler mq-deadline registered
Mar 7 01:46:41.859568 kernel: io scheduler kyber registered
Mar 7 01:46:41.859580 kernel: io scheduler bfq registered
Mar 7 01:46:41.859593 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:46:41.859604 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 7 01:46:41.859618 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 7 01:46:41.859629 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 7 01:46:41.859642 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:46:41.859653 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:46:41.859666 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:46:41.859685 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:46:41.859696 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:46:41.860027 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 7 01:46:41.860115 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 01:46:41.860339 kernel: rtc_cmos 00:04: registered as rtc0
Mar 7 01:46:41.860609 kernel: rtc_cmos 00:04: setting system clock to 2026-03-07T01:46:39 UTC (1772847999)
Mar 7 01:46:41.860827 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 7 01:46:41.860952 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 7 01:46:41.860967 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:46:41.860981 kernel: Segment Routing with IPv6
Mar 7 01:46:41.860991 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:46:41.861004 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:46:41.861015 kernel: Key type dns_resolver registered
Mar 7 01:46:41.861027 kernel: IPI shorthand broadcast: enabled
Mar 7 01:46:41.861039 kernel: sched_clock: Marking stable (4752029495, 1779312363)->(8213881800, -1682539942)
Mar 7 01:46:41.861103 kernel: registered taskstats version 1
Mar 7 01:46:41.861118 kernel: Loading compiled-in X.509 certificates
Mar 7 01:46:41.861138 kernel: hrtimer: interrupt took 18197424 ns
Mar 7 01:46:41.861148 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:46:41.861159 kernel: Key type .fscrypt registered
Mar 7 01:46:41.861172 kernel: Key type fscrypt-provisioning registered
Mar 7 01:46:41.861184 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:46:41.861196 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:46:41.861208 kernel: ima: No architecture policies found
Mar 7 01:46:41.861220 kernel: clk: Disabling unused clocks
Mar 7 01:46:41.861238 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:46:41.861251 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:46:41.861263 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:46:41.861276 kernel: Run /init as init process
Mar 7 01:46:41.861286 kernel: with arguments:
Mar 7 01:46:41.861299 kernel: /init
Mar 7 01:46:41.861311 kernel: with environment:
Mar 7 01:46:41.861323 kernel: HOME=/
Mar 7 01:46:41.861333 kernel: TERM=linux
Mar 7 01:46:41.861348 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:46:41.861370 systemd[1]: Detected virtualization kvm.
Mar 7 01:46:41.861381 systemd[1]: Detected architecture x86-64.
Mar 7 01:46:41.861394 systemd[1]: Running in initrd.
Mar 7 01:46:41.861407 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:46:41.861420 systemd[1]: Hostname set to .
Mar 7 01:46:41.861432 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:46:41.861446 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:46:41.861465 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:46:41.861477 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:46:41.861491 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:46:41.861505 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:46:41.861517 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:46:41.861530 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:46:41.861545 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:46:41.861563 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:46:41.861576 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:46:41.861589 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:46:41.861603 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:46:41.861640 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:46:41.861662 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:46:41.861676 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:46:41.861688 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:46:41.861702 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:46:41.861714 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:46:41.861728 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:46:41.861741 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:46:41.861754 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:46:41.861767 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:46:41.861785 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:46:41.861799 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:46:41.861811 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:46:41.861825 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:46:41.861839 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:46:41.862114 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:46:41.862134 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:46:41.862271 systemd-journald[194]: Collecting audit messages is disabled.
Mar 7 01:46:41.862312 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:46:41.862326 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:46:41.862340 systemd-journald[194]: Journal started
Mar 7 01:46:41.862369 systemd-journald[194]: Runtime Journal (/run/log/journal/462933c7779d4ad389e97c4893c0aa2b) is 6.0M, max 48.4M, 42.3M free.
Mar 7 01:46:41.862436 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:46:41.909942 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:46:41.916194 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:46:42.102288 systemd-modules-load[195]: Inserted module 'overlay'
Mar 7 01:46:42.851203 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:46:42.851418 kernel: Bridge firewalling registered
Mar 7 01:46:42.107160 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:46:42.362094 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 7 01:46:42.872452 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:46:42.928704 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 01:46:42.955709 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:46:42.989367 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:46:43.025255 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:46:43.123586 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:46:43.193390 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:46:43.248782 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:46:43.317042 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:46:43.318789 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:46:43.354472 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:46:43.377539 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:46:43.427025 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 7 01:46:43.573470 dracut-cmdline[230]: dracut-dracut-053 Mar 7 01:46:43.603203 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:46:43.812942 systemd-resolved[228]: Positive Trust Anchors: Mar 7 01:46:43.812964 systemd-resolved[228]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:46:43.813009 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:46:43.818235 systemd-resolved[228]: Defaulting to hostname 'linux'. Mar 7 01:46:43.823192 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:46:43.976651 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:46:44.212564 kernel: SCSI subsystem initialized Mar 7 01:46:44.272946 kernel: Loading iSCSI transport class v2.0-870. Mar 7 01:46:44.378302 kernel: iscsi: registered transport (tcp) Mar 7 01:46:44.490530 kernel: iscsi: registered transport (qla4xxx) Mar 7 01:46:44.490627 kernel: QLogic iSCSI HBA Driver Mar 7 01:46:44.749716 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 7 01:46:44.856323 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 7 01:46:45.329312 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 7 01:46:45.329557 kernel: device-mapper: uevent: version 1.0.3 Mar 7 01:46:45.329582 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 7 01:46:45.645617 kernel: raid6: avx2x4 gen() 16417 MB/s Mar 7 01:46:45.662487 kernel: raid6: avx2x2 gen() 19526 MB/s Mar 7 01:46:45.696041 kernel: raid6: avx2x1 gen() 8348 MB/s Mar 7 01:46:45.696180 kernel: raid6: using algorithm avx2x2 gen() 19526 MB/s Mar 7 01:46:45.734290 kernel: raid6: .... xor() 11358 MB/s, rmw enabled Mar 7 01:46:45.734367 kernel: raid6: using avx2x2 recovery algorithm Mar 7 01:46:45.805277 kernel: xor: automatically using best checksumming function avx Mar 7 01:46:46.589348 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 7 01:46:46.635275 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:46:46.704389 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:46:46.750612 systemd-udevd[413]: Using default interface naming scheme 'v255'. Mar 7 01:46:46.766054 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:46:46.813756 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 7 01:46:46.887216 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Mar 7 01:46:47.131589 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:46:47.185343 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:46:47.388473 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:46:47.438942 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 7 01:46:47.492581 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 01:46:47.507474 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 7 01:46:47.525026 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:46:47.573665 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:46:47.701126 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 01:46:47.739582 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:46:47.741553 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:46:47.821588 kernel: cryptd: max_cpu_qlen set to 1000 Mar 7 01:46:47.761840 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:46:47.795594 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:46:47.801593 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:46:47.980483 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:46:48.103815 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:46:48.130041 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:46:48.203685 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 7 01:46:48.239932 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 7 01:46:48.252544 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 7 01:46:48.252604 kernel: GPT:9289727 != 19775487 Mar 7 01:46:48.252622 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 7 01:46:48.252653 kernel: GPT:9289727 != 19775487 Mar 7 01:46:48.252669 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 7 01:46:48.252685 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:46:48.569449 kernel: libata version 3.00 loaded. 
Mar 7 01:46:48.736331 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (458) Mar 7 01:46:48.790803 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 7 01:46:49.168292 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (478) Mar 7 01:46:49.168340 kernel: AVX2 version of gcm_enc/dec engaged. Mar 7 01:46:49.168371 kernel: AES CTR mode by8 optimization enabled Mar 7 01:46:49.168388 kernel: ahci 0000:00:1f.2: version 3.0 Mar 7 01:46:49.168788 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 7 01:46:49.168813 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 7 01:46:49.169308 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 7 01:46:49.173555 kernel: scsi host0: ahci Mar 7 01:46:49.173813 kernel: scsi host1: ahci Mar 7 01:46:49.174420 kernel: scsi host2: ahci Mar 7 01:46:49.174664 kernel: scsi host3: ahci Mar 7 01:46:49.177714 kernel: scsi host4: ahci Mar 7 01:46:49.178150 kernel: scsi host5: ahci Mar 7 01:46:49.178402 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Mar 7 01:46:49.178422 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Mar 7 01:46:49.178443 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Mar 7 01:46:49.178459 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Mar 7 01:46:49.178475 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Mar 7 01:46:49.178491 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Mar 7 01:46:49.124288 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Mar 7 01:46:49.267811 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 7 01:46:49.267964 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 7 01:46:49.267988 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 7 01:46:49.268006 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 7 01:46:49.268023 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 7 01:46:49.268039 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 7 01:46:49.185999 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 7 01:46:49.301711 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 7 01:46:49.301744 kernel: ata3.00: applying bridge limits Mar 7 01:46:49.301779 kernel: ata3.00: configured for UDMA/100 Mar 7 01:46:49.206307 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:46:49.336328 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 7 01:46:49.311603 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 7 01:46:49.348716 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 7 01:46:49.420700 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 01:46:49.442634 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:46:49.478219 disk-uuid[552]: Primary Header is updated. Mar 7 01:46:49.478219 disk-uuid[552]: Secondary Entries is updated. Mar 7 01:46:49.478219 disk-uuid[552]: Secondary Header is updated. Mar 7 01:46:49.514745 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:46:49.537365 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:46:49.561969 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:46:49.599970 kernel: block device autoloading is deprecated and will be removed. 
Mar 7 01:46:49.618295 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 7 01:46:49.618784 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 7 01:46:49.622410 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:46:49.657517 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 7 01:46:50.556973 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:46:50.568628 disk-uuid[554]: The operation has completed successfully. Mar 7 01:46:50.714670 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 01:46:50.715012 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 01:46:50.759503 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 01:46:50.782963 sh[594]: Success Mar 7 01:46:50.861595 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 7 01:46:51.039994 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 01:46:51.060793 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 7 01:46:51.081362 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 01:46:51.121419 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 Mar 7 01:46:51.121579 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:46:51.132451 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 01:46:51.132501 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 01:46:51.135259 kernel: BTRFS info (device dm-0): using free space tree Mar 7 01:46:51.180573 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 01:46:51.191763 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Mar 7 01:46:51.222570 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 01:46:51.239010 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 7 01:46:51.289032 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:46:51.289146 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:46:51.289168 kernel: BTRFS info (device vda6): using free space tree Mar 7 01:46:51.326066 kernel: BTRFS info (device vda6): auto enabling async discard Mar 7 01:46:51.353055 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 7 01:46:51.368991 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:46:51.381056 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 01:46:51.399528 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 7 01:46:51.595557 ignition[701]: Ignition 2.19.0 Mar 7 01:46:51.598944 ignition[701]: Stage: fetch-offline Mar 7 01:46:51.599028 ignition[701]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:46:51.599046 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:46:51.599311 ignition[701]: parsed url from cmdline: "" Mar 7 01:46:51.599319 ignition[701]: no config URL provided Mar 7 01:46:51.599329 ignition[701]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:46:51.599346 ignition[701]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:46:51.600344 ignition[701]: op(1): [started] loading QEMU firmware config module Mar 7 01:46:51.600354 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 7 01:46:51.642600 ignition[701]: op(1): [finished] loading QEMU firmware config module Mar 7 01:46:51.714151 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Mar 7 01:46:51.758638 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:46:51.831593 ignition[701]: parsing config with SHA512: c18f3bf01affa14b54e33c4ea9078bf9cb03407b369de0827994fb520a618af31fbc3d95c32cb74b096cbf900a880ce0d0dfe2954e9408bc8143d9810fb42dd1 Mar 7 01:46:51.836720 systemd-networkd[783]: lo: Link UP Mar 7 01:46:51.836734 systemd-networkd[783]: lo: Gained carrier Mar 7 01:46:51.839306 ignition[701]: fetch-offline: fetch-offline passed Mar 7 01:46:51.838309 unknown[701]: fetched base config from "system" Mar 7 01:46:51.839437 ignition[701]: Ignition finished successfully Mar 7 01:46:51.838328 unknown[701]: fetched user config from "qemu" Mar 7 01:46:51.842438 systemd-networkd[783]: Enumeration completed Mar 7 01:46:51.842656 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:46:51.846213 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:46:51.846219 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:46:51.848259 systemd[1]: Reached target network.target - Network. Mar 7 01:46:51.856451 systemd-networkd[783]: eth0: Link UP Mar 7 01:46:51.856459 systemd-networkd[783]: eth0: Gained carrier Mar 7 01:46:51.856474 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:46:51.921702 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:46:51.971389 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 7 01:46:51.972013 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.110/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 7 01:46:52.017229 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 7 01:46:52.068422 ignition[787]: Ignition 2.19.0 Mar 7 01:46:52.068470 ignition[787]: Stage: kargs Mar 7 01:46:52.068664 ignition[787]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:46:52.068678 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:46:52.090314 ignition[787]: kargs: kargs passed Mar 7 01:46:52.090442 ignition[787]: Ignition finished successfully Mar 7 01:46:52.105343 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 7 01:46:52.144423 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 7 01:46:52.206014 ignition[795]: Ignition 2.19.0 Mar 7 01:46:52.206033 ignition[795]: Stage: disks Mar 7 01:46:52.206431 ignition[795]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:46:52.206452 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:46:52.208342 ignition[795]: disks: disks passed Mar 7 01:46:52.208420 ignition[795]: Ignition finished successfully Mar 7 01:46:52.237309 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 01:46:52.258586 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 01:46:52.265818 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 01:46:52.287365 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:46:52.302371 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:46:52.310415 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:46:52.344258 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 7 01:46:52.403588 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 7 01:46:52.418694 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 01:46:52.460345 systemd[1]: Mounting sysroot.mount - /sysroot... 
Mar 7 01:46:52.944528 systemd-networkd[783]: eth0: Gained IPv6LL Mar 7 01:46:53.148552 kernel: EXT4-fs (vda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none. Mar 7 01:46:53.151209 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 01:46:53.163610 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 01:46:53.190745 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:46:53.223367 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 7 01:46:53.295726 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815) Mar 7 01:46:53.295763 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:46:53.295784 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:46:53.295801 kernel: BTRFS info (device vda6): using free space tree Mar 7 01:46:53.287504 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 7 01:46:53.330307 kernel: BTRFS info (device vda6): auto enabling async discard Mar 7 01:46:53.287591 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 01:46:53.287636 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:46:53.349389 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 01:46:53.377460 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 01:46:53.429800 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 7 01:46:53.712967 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 01:46:53.767259 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Mar 7 01:46:53.804933 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 01:46:53.845999 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 01:46:54.294812 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 01:46:54.338307 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 01:46:54.358986 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 7 01:46:54.405254 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 7 01:46:54.441075 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:46:54.574596 ignition[929]: INFO : Ignition 2.19.0 Mar 7 01:46:54.574596 ignition[929]: INFO : Stage: mount Mar 7 01:46:54.603831 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:46:54.603831 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:46:54.603831 ignition[929]: INFO : mount: mount passed Mar 7 01:46:54.603831 ignition[929]: INFO : Ignition finished successfully Mar 7 01:46:54.591713 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 7 01:46:54.679406 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 01:46:54.705598 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 01:46:54.768969 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 7 01:46:54.821419 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942) Mar 7 01:46:54.844306 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:46:54.844394 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:46:54.858398 kernel: BTRFS info (device vda6): using free space tree Mar 7 01:46:54.895576 kernel: BTRFS info (device vda6): auto enabling async discard Mar 7 01:46:54.904809 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 01:46:55.047804 ignition[960]: INFO : Ignition 2.19.0 Mar 7 01:46:55.047804 ignition[960]: INFO : Stage: files Mar 7 01:46:55.047804 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:46:55.047804 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:46:55.128431 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Mar 7 01:46:55.128431 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 7 01:46:55.128431 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 7 01:46:55.128431 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 7 01:46:55.128431 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 7 01:46:55.128431 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 7 01:46:55.128431 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:46:55.128431 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 7 01:46:55.096254 unknown[960]: wrote ssh authorized keys file for user: core Mar 7 01:46:55.300984 ignition[960]: INFO : 
files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 7 01:46:56.044773 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:46:56.044773 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 7 01:46:56.044773 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 7 01:46:56.044773 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:46:56.044773 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:46:56.044773 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:46:56.044773 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:46:56.044773 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:46:56.044773 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:46:56.044773 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:46:56.044773 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:46:56.044773 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:46:56.291273 
ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:46:56.291273 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:46:56.291273 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 7 01:46:56.465130 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 7 01:46:57.524613 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:46:57.524613 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 7 01:46:57.559586 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:46:57.559586 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:46:57.559586 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 7 01:46:57.559586 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 7 01:46:57.559586 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 7 01:46:57.559586 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 7 01:46:57.559586 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 7 01:46:57.559586 
ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 7 01:46:57.807638 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 7 01:46:57.830533 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 7 01:46:57.842786 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 7 01:46:57.842786 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 7 01:46:57.842786 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 7 01:46:57.842786 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:46:57.842786 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:46:57.842786 ignition[960]: INFO : files: files passed Mar 7 01:46:57.842786 ignition[960]: INFO : Ignition finished successfully Mar 7 01:46:57.939665 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 7 01:46:57.996474 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 7 01:46:58.032068 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 7 01:46:58.108247 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 7 01:46:58.111028 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Mar 7 01:46:58.169653 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Mar 7 01:46:58.186973 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:46:58.186973 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:46:58.226153 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:46:58.205699 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:46:58.230590 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 7 01:46:58.277620 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 01:46:58.424056 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 01:46:58.428605 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 01:46:58.483819 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 01:46:58.503368 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 01:46:58.524747 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 01:46:58.611584 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 01:46:58.688004 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:46:58.768635 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 01:46:58.847437 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:46:58.872820 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:46:58.942808 systemd[1]: Stopped target timers.target - Timer Units. 
Mar 7 01:46:58.963590 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 01:46:58.963787 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:46:58.994665 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 01:46:59.018988 systemd[1]: Stopped target basic.target - Basic System. Mar 7 01:46:59.064196 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 01:46:59.100405 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:46:59.100804 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 01:46:59.114735 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 01:46:59.134756 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:46:59.155211 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 01:46:59.167294 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 01:46:59.183717 systemd[1]: Stopped target swap.target - Swaps. Mar 7 01:46:59.191717 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 01:46:59.192167 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:46:59.226235 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:46:59.239228 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:46:59.243402 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 01:46:59.245759 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:46:59.252679 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 01:46:59.252826 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 7 01:46:59.268972 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Mar 7 01:46:59.269227 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:46:59.280517 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:46:59.299022 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:46:59.306331 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:46:59.369722 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:46:59.370183 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:46:59.370442 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:46:59.372350 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:46:59.543612 ignition[1014]: INFO : Ignition 2.19.0
Mar 7 01:46:59.543612 ignition[1014]: INFO : Stage: umount
Mar 7 01:46:59.543612 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:46:59.543612 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:46:59.543612 ignition[1014]: INFO : umount: umount passed
Mar 7 01:46:59.543612 ignition[1014]: INFO : Ignition finished successfully
Mar 7 01:46:59.379262 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:46:59.379482 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:46:59.379757 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:46:59.380016 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:46:59.380293 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:46:59.380452 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:46:59.443297 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:46:59.475581 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:46:59.511015 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:46:59.511358 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:46:59.511598 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:46:59.511768 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:46:59.529441 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:46:59.530717 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:46:59.587835 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:46:59.588337 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:46:59.779718 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:46:59.787821 systemd[1]: Stopped target network.target - Network.
Mar 7 01:46:59.828296 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:46:59.828438 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:46:59.843301 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:46:59.843420 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:46:59.854748 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:46:59.854963 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:46:59.855081 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:46:59.855218 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:46:59.855622 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:46:59.855808 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:47:00.039364 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:47:00.042507 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:47:00.079385 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:47:00.079764 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:47:00.080066 systemd-networkd[783]: eth0: DHCPv6 lease lost
Mar 7 01:47:00.132258 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:47:00.132520 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:47:00.156371 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:47:00.156486 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:47:00.161220 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:47:00.161332 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:47:00.255166 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:47:00.321066 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:47:00.321447 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:47:00.381094 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:47:00.381749 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:47:00.423420 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:47:00.423643 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:47:00.434060 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:47:00.434266 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:47:00.434629 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:47:00.614645 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:47:00.615053 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:47:00.645421 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:47:00.645642 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:47:00.742083 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:47:00.745432 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:47:00.821312 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:47:00.821473 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:47:00.869405 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:47:00.870057 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:47:01.072349 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:47:01.072476 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:47:01.105401 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:47:01.105528 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:47:01.221562 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:47:01.260475 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:47:01.260579 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:47:01.342528 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 01:47:01.342645 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:47:01.362558 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:47:01.362681 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:47:01.442499 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:47:01.442832 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:47:01.492649 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:47:01.493060 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:47:01.513749 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:47:01.538046 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:47:01.565392 systemd[1]: Switching root.
Mar 7 01:47:01.626981 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:47:01.627166 systemd-journald[194]: Journal stopped
Mar 7 01:47:05.823618 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:47:05.823731 kernel: SELinux: policy capability open_perms=1
Mar 7 01:47:05.823753 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:47:05.823784 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:47:05.823802 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:47:05.823827 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:47:05.823948 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:47:05.823974 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:47:05.823992 kernel: audit: type=1403 audit(1772848022.155:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:47:05.824012 systemd[1]: Successfully loaded SELinux policy in 228.884ms.
Mar 7 01:47:05.824045 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 44.010ms.
Mar 7 01:47:05.824069 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:47:05.824087 systemd[1]: Detected virtualization kvm.
Mar 7 01:47:05.824105 systemd[1]: Detected architecture x86-64.
Mar 7 01:47:05.825197 systemd[1]: Detected first boot.
Mar 7 01:47:05.825224 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:47:05.825244 zram_generator::config[1058]: No configuration found.
Mar 7 01:47:05.825265 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:47:05.825281 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 01:47:05.825299 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 01:47:05.825321 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:47:05.825339 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:47:05.825366 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:47:05.825385 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:47:05.825402 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:47:05.825427 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:47:05.825444 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:47:05.825461 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:47:05.825480 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:47:05.825497 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:47:05.825518 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:47:05.825539 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:47:05.825562 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:47:05.825579 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:47:05.825596 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:47:05.825616 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 01:47:05.825635 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:47:05.825652 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 01:47:05.825672 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 01:47:05.825689 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:47:05.825716 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:47:05.825738 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:47:05.825755 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:47:05.825774 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:47:05.825793 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:47:05.825812 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:47:05.825829 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:47:05.826208 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:47:05.826242 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:47:05.826263 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:47:05.826282 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:47:05.826303 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:47:05.826320 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:47:05.826338 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:47:05.826358 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:47:05.826377 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:47:05.826399 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:47:05.826419 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:47:05.826440 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 01:47:05.826457 systemd[1]: Reached target machines.target - Containers.
Mar 7 01:47:05.826477 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:47:05.826496 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:47:05.826515 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:47:05.826532 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:47:05.826549 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:47:05.826571 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:47:05.826587 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:47:05.826603 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:47:05.826624 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:47:05.826640 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:47:05.826662 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 01:47:05.826681 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 01:47:05.826698 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 01:47:05.826723 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 01:47:05.826739 kernel: fuse: init (API version 7.39)
Mar 7 01:47:05.826759 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:47:05.826776 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:47:05.826794 kernel: loop: module loaded
Mar 7 01:47:05.827193 systemd-journald[1142]: Collecting audit messages is disabled.
Mar 7 01:47:05.827248 systemd-journald[1142]: Journal started
Mar 7 01:47:05.827287 systemd-journald[1142]: Runtime Journal (/run/log/journal/462933c7779d4ad389e97c4893c0aa2b) is 6.0M, max 48.4M, 42.3M free.
Mar 7 01:47:04.160593 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:47:04.212812 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 7 01:47:04.214420 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 01:47:04.215696 systemd[1]: systemd-journald.service: Consumed 2.355s CPU time.
Mar 7 01:47:05.851460 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:47:05.873277 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:47:05.910018 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:47:05.938006 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 01:47:05.938093 systemd[1]: Stopped verity-setup.service.
Mar 7 01:47:05.997449 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:47:06.011100 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:47:06.024221 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:47:06.036416 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:47:06.047743 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:47:06.061234 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:47:06.075509 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:47:06.087428 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:47:06.099060 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:47:06.107776 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:47:06.119586 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:47:06.122195 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:47:06.136777 kernel: ACPI: bus type drm_connector registered
Mar 7 01:47:06.138516 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:47:06.138836 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:47:06.151359 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:47:06.151683 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:47:06.165609 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:47:06.166283 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:47:06.175770 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:47:06.176425 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:47:06.187547 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:47:06.188211 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:47:06.197547 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:47:06.206532 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:47:06.217082 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:47:06.243188 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:47:06.282643 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:47:06.317596 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:47:06.339702 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:47:06.367306 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:47:06.367386 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:47:06.386472 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 01:47:06.434006 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 01:47:06.473061 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 01:47:06.481599 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:47:06.501007 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:47:06.527502 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:47:06.548517 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:47:06.558349 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:47:06.568652 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:47:06.573277 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:47:06.587569 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 01:47:06.588274 systemd-journald[1142]: Time spent on flushing to /var/log/journal/462933c7779d4ad389e97c4893c0aa2b is 35.505ms for 947 entries.
Mar 7 01:47:06.588274 systemd-journald[1142]: System Journal (/var/log/journal/462933c7779d4ad389e97c4893c0aa2b) is 8.0M, max 195.6M, 187.6M free.
Mar 7 01:47:06.657022 systemd-journald[1142]: Received client request to flush runtime journal.
Mar 7 01:47:06.619350 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:47:06.661610 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 01:47:06.690774 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:47:06.708429 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:47:06.762262 kernel: loop0: detected capacity change from 0 to 142488
Mar 7 01:47:06.734630 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 01:47:06.779830 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 01:47:06.801713 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:47:06.821380 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:47:06.861774 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:47:06.867682 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Mar 7 01:47:06.867762 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Mar 7 01:47:06.894456 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 01:47:06.916761 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:47:06.937031 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 7 01:47:06.967333 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 01:47:07.006828 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 01:47:07.057975 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 01:47:07.060509 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 01:47:07.111064 kernel: loop1: detected capacity change from 0 to 140768
Mar 7 01:47:07.160065 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 01:47:07.208434 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:47:07.316405 kernel: loop2: detected capacity change from 0 to 228704
Mar 7 01:47:07.327595 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Mar 7 01:47:07.329406 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Mar 7 01:47:07.347775 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:47:07.425502 kernel: loop3: detected capacity change from 0 to 142488
Mar 7 01:47:07.481192 kernel: loop4: detected capacity change from 0 to 140768
Mar 7 01:47:07.526998 kernel: loop5: detected capacity change from 0 to 228704
Mar 7 01:47:07.599651 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 7 01:47:07.600769 (sd-merge)[1199]: Merged extensions into '/usr'.
Mar 7 01:47:07.616430 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 01:47:07.616750 systemd[1]: Reloading...
Mar 7 01:47:07.719288 zram_generator::config[1222]: No configuration found.
Mar 7 01:47:07.954813 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:47:08.043757 systemd[1]: Reloading finished in 425 ms.
Mar 7 01:47:08.063550 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 01:47:08.109002 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 01:47:08.128428 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 01:47:08.145214 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 01:47:08.209191 systemd[1]: Starting ensure-sysext.service...
Mar 7 01:47:08.221030 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:47:08.254263 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:47:08.314613 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)...
Mar 7 01:47:08.314677 systemd[1]: Reloading...
Mar 7 01:47:08.352444 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:47:08.353248 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:47:08.355085 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:47:08.355586 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Mar 7 01:47:08.355709 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Mar 7 01:47:08.370352 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:47:08.370372 systemd-tmpfiles[1264]: Skipping /boot
Mar 7 01:47:08.383832 systemd-udevd[1265]: Using default interface naming scheme 'v255'.
Mar 7 01:47:08.414709 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:47:08.414733 systemd-tmpfiles[1264]: Skipping /boot
Mar 7 01:47:08.575780 zram_generator::config[1309]: No configuration found.
Mar 7 01:47:08.682212 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1311)
Mar 7 01:47:08.906047 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 7 01:47:08.924013 kernel: ACPI: button: Power Button [PWRF]
Mar 7 01:47:08.965296 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:47:09.024060 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 7 01:47:09.024544 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 7 01:47:09.035751 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 7 01:47:09.114584 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 7 01:47:09.158109 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 7 01:47:09.185518 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 7 01:47:09.185732 systemd[1]: Reloading finished in 870 ms.
Mar 7 01:47:09.224983 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 01:47:09.238614 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:47:09.345416 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:47:09.491504 systemd[1]: Finished ensure-sysext.service.
Mar 7 01:47:09.558991 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:47:09.701808 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:47:09.727668 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 01:47:09.749487 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:47:09.759439 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:47:09.784340 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:47:09.811453 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:47:09.822236 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:47:09.835382 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:47:09.840484 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 01:47:09.864419 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 01:47:09.886202 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:47:09.917608 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:47:09.937034 augenrules[1387]: No rules
Mar 7 01:47:09.942331 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 7 01:47:09.957416 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 01:47:09.980608 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:47:10.001513 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:47:10.007661 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:47:10.020556 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:47:10.021194 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:47:10.033506 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:47:10.034205 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:47:10.045689 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:47:10.046701 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:47:10.057286 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:47:10.058211 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:47:10.073218 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 01:47:10.088265 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 01:47:10.125366 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:47:10.125614 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:47:10.253835 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 01:47:10.422679 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 01:47:10.743835 kernel: kvm_amd: TSC scaling supported
Mar 7 01:47:10.744051 kernel: kvm_amd: Nested Virtualization enabled
Mar 7 01:47:10.744075 kernel: kvm_amd: Nested Paging enabled
Mar 7 01:47:10.744102 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 7 01:47:10.744187 kernel: kvm_amd: PMU virtualization is disabled
Mar 7 01:47:11.001665 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 01:47:11.294559 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 01:47:11.414705 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:47:11.519484 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 01:47:11.812052 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 01:47:13.260660 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 01:47:15.078778 systemd-resolved[1386]: Positive Trust Anchors:
Mar 7 01:47:15.079305 systemd-resolved[1386]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:47:15.079355 systemd-resolved[1386]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:47:15.114728 systemd-resolved[1386]: Defaulting to hostname 'linux'.
Mar 7 01:47:15.126022 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:47:15.136800 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 7 01:47:15.146202 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:47:15.153965 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 01:47:15.170748 systemd-networkd[1384]: lo: Link UP
Mar 7 01:47:15.170762 systemd-networkd[1384]: lo: Gained carrier
Mar 7 01:47:15.201802 systemd-networkd[1384]: Enumeration completed
Mar 7 01:47:15.202485 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:47:15.207979 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:47:15.208045 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:47:15.212059 systemd[1]: Reached target network.target - Network.
Mar 7 01:47:15.218973 systemd-networkd[1384]: eth0: Link UP
Mar 7 01:47:15.218984 systemd-networkd[1384]: eth0: Gained carrier
Mar 7 01:47:15.219012 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:47:15.251807 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 01:47:15.321267 systemd-networkd[1384]: eth0: DHCPv4 address 10.0.0.110/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 7 01:47:15.324381 systemd-timesyncd[1390]: Network configuration changed, trying to establish connection.
Mar 7 01:47:16.049888 systemd-timesyncd[1390]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 7 01:47:16.051249 systemd-resolved[1386]: Clock change detected. Flushing caches.
Mar 7 01:47:16.054388 systemd-timesyncd[1390]: Initial clock synchronization to Sat 2026-03-07 01:47:16.047789 UTC.
Mar 7 01:47:16.131210 kernel: EDAC MC: Ver: 3.0.0
Mar 7 01:47:16.174245 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 01:47:16.248889 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 01:47:16.567270 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:47:16.748557 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 01:47:16.781713 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:47:16.826939 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:47:16.843168 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 01:47:16.871725 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 01:47:16.889908 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 01:47:16.925256 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 01:47:16.955162 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 01:47:16.977485 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 01:47:16.977617 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:47:17.017808 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:47:17.037408 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 01:47:17.061457 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 01:47:17.117926 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 01:47:17.156818 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 01:47:17.180808 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 01:47:17.208963 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:47:17.220817 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:47:17.231488 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:47:17.231600 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:47:17.251459 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:47:17.263578 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 01:47:17.326640 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 01:47:17.375699 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 01:47:17.424889 systemd-networkd[1384]: eth0: Gained IPv6LL
Mar 7 01:47:17.425249 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 01:47:17.446474 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 01:47:17.466211 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 01:47:17.475917 jq[1432]: false
Mar 7 01:47:17.503240 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 01:47:17.538420 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 01:47:17.555642 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 01:47:17.582230 extend-filesystems[1433]: Found loop3
Mar 7 01:47:17.582230 extend-filesystems[1433]: Found loop4
Mar 7 01:47:17.582230 extend-filesystems[1433]: Found loop5
Mar 7 01:47:17.582230 extend-filesystems[1433]: Found sr0
Mar 7 01:47:17.582230 extend-filesystems[1433]: Found vda
Mar 7 01:47:17.582230 extend-filesystems[1433]: Found vda1
Mar 7 01:47:17.582230 extend-filesystems[1433]: Found vda2
Mar 7 01:47:17.582230 extend-filesystems[1433]: Found vda3
Mar 7 01:47:17.582230 extend-filesystems[1433]: Found usr
Mar 7 01:47:17.582230 extend-filesystems[1433]: Found vda4
Mar 7 01:47:17.582230 extend-filesystems[1433]: Found vda6
Mar 7 01:47:17.582230 extend-filesystems[1433]: Found vda7
Mar 7 01:47:17.582230 extend-filesystems[1433]: Found vda9
Mar 7 01:47:17.582230 extend-filesystems[1433]: Checking size of /dev/vda9
Mar 7 01:47:17.965430 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1320)
Mar 7 01:47:17.965760 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 7 01:47:17.652803 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 01:47:17.685486 dbus-daemon[1431]: [system] SELinux support is enabled
Mar 7 01:47:17.982722 extend-filesystems[1433]: Resized partition /dev/vda9
Mar 7 01:47:17.688858 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 01:47:18.014784 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024)
Mar 7 01:47:17.692644 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 01:47:18.051537 update_engine[1449]: I20260307 01:47:17.894486 1449 main.cc:92] Flatcar Update Engine starting
Mar 7 01:47:18.051537 update_engine[1449]: I20260307 01:47:17.901799 1449 update_check_scheduler.cc:74] Next update check in 10m14s
Mar 7 01:47:17.750644 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 01:47:17.772866 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 01:47:18.052612 jq[1453]: true
Mar 7 01:47:17.835890 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 01:47:17.856382 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 7 01:47:17.941797 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 01:47:18.053111 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 01:47:18.053826 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 01:47:18.060121 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 01:47:18.081431 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 7 01:47:18.081454 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 01:47:18.101868 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 01:47:18.107636 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 01:47:18.195710 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 7 01:47:18.195710 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 7 01:47:18.195710 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 7 01:47:18.241746 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 7 01:47:18.262632 extend-filesystems[1433]: Resized filesystem in /dev/vda9
Mar 7 01:47:18.242245 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 7 01:47:18.283547 jq[1458]: true
Mar 7 01:47:18.275665 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 7 01:47:18.275777 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 7 01:47:18.282864 systemd-logind[1442]: New seat seat0.
Mar 7 01:47:18.289562 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 01:47:18.380414 sshd_keygen[1455]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 7 01:47:18.382960 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 01:47:18.506410 tar[1457]: linux-amd64/LICENSE
Mar 7 01:47:18.505352 dbus-daemon[1431]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 7 01:47:18.508856 tar[1457]: linux-amd64/helm
Mar 7 01:47:18.532935 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 01:47:18.554798 systemd[1]: Reached target network-online.target - Network is Online.
Mar 7 01:47:18.709876 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 7 01:47:18.753513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:47:18.824899 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 7 01:47:18.866645 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 01:47:18.867176 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 01:47:18.916203 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 01:47:18.930820 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 01:47:19.059950 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 01:47:19.130405 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 7 01:47:19.165365 bash[1497]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:47:19.166506 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 01:47:19.336491 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 7 01:47:19.365757 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 7 01:47:19.377802 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 7 01:47:19.633270 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 7 01:47:19.633885 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 7 01:47:19.669248 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 7 01:47:19.723760 systemd[1]: issuegen.service: Deactivated successfully.
Mar 7 01:47:19.725591 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 7 01:47:19.841887 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 7 01:47:19.945661 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 01:47:20.156802 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 7 01:47:20.448071 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 7 01:47:20.786493 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 7 01:47:20.852474 systemd[1]: Reached target getty.target - Login Prompts.
Mar 7 01:47:21.224475 containerd[1460]: time="2026-03-07T01:47:21.223542264Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 7 01:47:21.618699 containerd[1460]: time="2026-03-07T01:47:21.618604494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:47:21.671624 containerd[1460]: time="2026-03-07T01:47:21.667568507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:47:21.671624 containerd[1460]: time="2026-03-07T01:47:21.667639880Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 7 01:47:21.671624 containerd[1460]: time="2026-03-07T01:47:21.667665197Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 7 01:47:21.671624 containerd[1460]: time="2026-03-07T01:47:21.669389005Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 7 01:47:21.671624 containerd[1460]: time="2026-03-07T01:47:21.669425914Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 7 01:47:21.671624 containerd[1460]: time="2026-03-07T01:47:21.669551839Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:47:21.671624 containerd[1460]: time="2026-03-07T01:47:21.669576766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:47:21.674582 containerd[1460]: time="2026-03-07T01:47:21.673625650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:47:21.674582 containerd[1460]: time="2026-03-07T01:47:21.673730947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 7 01:47:21.674582 containerd[1460]: time="2026-03-07T01:47:21.673760342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:47:21.674582 containerd[1460]: time="2026-03-07T01:47:21.673782112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 7 01:47:21.674582 containerd[1460]: time="2026-03-07T01:47:21.673964352Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:47:21.684178 containerd[1460]: time="2026-03-07T01:47:21.674938120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:47:21.684178 containerd[1460]: time="2026-03-07T01:47:21.675412356Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:47:21.684178 containerd[1460]: time="2026-03-07T01:47:21.675439967Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 7 01:47:21.684178 containerd[1460]: time="2026-03-07T01:47:21.675591830Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 7 01:47:21.684178 containerd[1460]: time="2026-03-07T01:47:21.675663775Z" level=info msg="metadata content store policy set" policy=shared
Mar 7 01:47:21.798214 containerd[1460]: time="2026-03-07T01:47:21.796503251Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 7 01:47:21.798214 containerd[1460]: time="2026-03-07T01:47:21.797254544Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 7 01:47:21.798214 containerd[1460]: time="2026-03-07T01:47:21.812874155Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 7 01:47:21.798214 containerd[1460]: time="2026-03-07T01:47:21.812915011Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 7 01:47:21.798214 containerd[1460]: time="2026-03-07T01:47:21.812935790Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 7 01:47:21.798214 containerd[1460]: time="2026-03-07T01:47:21.815159241Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 7 01:47:21.798214 containerd[1460]: time="2026-03-07T01:47:21.820131915Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 7 01:47:21.798214 containerd[1460]: time="2026-03-07T01:47:21.822189035Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 7 01:47:21.798214 containerd[1460]: time="2026-03-07T01:47:21.822218561Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 7 01:47:21.798214 containerd[1460]: time="2026-03-07T01:47:21.822241523Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 7 01:47:21.798214 containerd[1460]: time="2026-03-07T01:47:21.822265137Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 7 01:47:21.798214 containerd[1460]: time="2026-03-07T01:47:21.822291687Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 7 01:47:21.798214 containerd[1460]: time="2026-03-07T01:47:21.822395541Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 7 01:47:21.798214 containerd[1460]: time="2026-03-07T01:47:21.822425286Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 7 01:47:21.838856 containerd[1460]: time="2026-03-07T01:47:21.822447458Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 7 01:47:21.838856 containerd[1460]: time="2026-03-07T01:47:21.822473166Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 7 01:47:21.838856 containerd[1460]: time="2026-03-07T01:47:21.822493504Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 7 01:47:21.838856 containerd[1460]: time="2026-03-07T01:47:21.822510956Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 7 01:47:21.838856 containerd[1460]: time="2026-03-07T01:47:21.822538678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 7 01:47:21.838856 containerd[1460]: time="2026-03-07T01:47:21.822557894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 7 01:47:21.838856 containerd[1460]: time="2026-03-07T01:47:21.822576128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 7 01:47:21.838856 containerd[1460]: time="2026-03-07T01:47:21.822594773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 7 01:47:21.838856 containerd[1460]: time="2026-03-07T01:47:21.822612576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 7 01:47:21.838856 containerd[1460]: time="2026-03-07T01:47:21.822631802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 7 01:47:21.838856 containerd[1460]: time="2026-03-07T01:47:21.822651128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 7 01:47:21.838856 containerd[1460]: time="2026-03-07T01:47:21.822670534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 7 01:47:21.838856 containerd[1460]: time="2026-03-07T01:47:21.822702093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 7 01:47:21.838856 containerd[1460]: time="2026-03-07T01:47:21.822726950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 7 01:47:21.839516 containerd[1460]: time="2026-03-07T01:47:21.822745564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 7 01:47:21.839516 containerd[1460]: time="2026-03-07T01:47:21.822772635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 7 01:47:21.839516 containerd[1460]: time="2026-03-07T01:47:21.822791340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 7 01:47:21.839516 containerd[1460]: time="2026-03-07T01:47:21.822833859Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 7 01:47:21.839516 containerd[1460]: time="2026-03-07T01:47:21.822867853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 7 01:47:21.839516 containerd[1460]: time="2026-03-07T01:47:21.822885396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 7 01:47:21.839516 containerd[1460]: time="2026-03-07T01:47:21.822901405Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 7 01:47:21.839516 containerd[1460]: time="2026-03-07T01:47:21.822977587Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 7 01:47:21.839516 containerd[1460]: time="2026-03-07T01:47:21.823190405Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 7 01:47:21.839516 containerd[1460]: time="2026-03-07T01:47:21.823213027Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 7 01:47:21.839516 containerd[1460]: time="2026-03-07T01:47:21.823229508Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 7 01:47:21.839516 containerd[1460]: time="2026-03-07T01:47:21.823245447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 7 01:47:21.839516 containerd[1460]: time="2026-03-07T01:47:21.823265435Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 7 01:47:21.839516 containerd[1460]: time="2026-03-07T01:47:21.823279481Z" level=info msg="NRI interface is disabled by configuration."
Mar 7 01:47:21.839873 containerd[1460]: time="2026-03-07T01:47:21.823374238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 7 01:47:21.839907 containerd[1460]: time="2026-03-07T01:47:21.832404667Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 7 01:47:21.839907 containerd[1460]: time="2026-03-07T01:47:21.832590394Z" level=info msg="Connect containerd service"
Mar 7 01:47:21.839907 containerd[1460]: time="2026-03-07T01:47:21.832672527Z" level=info msg="using legacy CRI server"
Mar 7 01:47:21.839907 containerd[1460]: time="2026-03-07T01:47:21.832689008Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 7 01:47:21.839907 containerd[1460]: time="2026-03-07T01:47:21.832912866Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 7 01:47:21.839907 containerd[1460]: time="2026-03-07T01:47:21.834637105Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 01:47:21.839907 containerd[1460]: time="2026-03-07T01:47:21.835185409Z" level=info msg="Start subscribing containerd event"
Mar 7 01:47:21.839907 containerd[1460]: time="2026-03-07T01:47:21.835242716Z" level=info msg="Start recovering state"
Mar 7 01:47:21.870707 containerd[1460]: time="2026-03-07T01:47:21.843539896Z" level=info msg="Start event monitor"
Mar 7 01:47:21.870707 containerd[1460]: time="2026-03-07T01:47:21.843621549Z" level=info msg="Start snapshots syncer"
Mar 7 01:47:21.870707 containerd[1460]: time="2026-03-07T01:47:21.843642898Z" level=info msg="Start cni network conf syncer for default"
Mar 7 01:47:21.870707 containerd[1460]: time="2026-03-07T01:47:21.843653809Z" level=info msg="Start streaming server"
Mar 7 01:47:21.870707 containerd[1460]: time="2026-03-07T01:47:21.853704963Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 7 01:47:21.870707 containerd[1460]: time="2026-03-07T01:47:21.854251464Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 7 01:47:21.872727 systemd[1]: Started containerd.service - containerd container runtime.
Mar 7 01:47:21.933151 containerd[1460]: time="2026-03-07T01:47:21.873667823Z" level=info msg="containerd successfully booted in 0.718057s"
Mar 7 01:47:24.167162 tar[1457]: linux-amd64/README.md
Mar 7 01:47:24.355225 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 7 01:47:26.566923 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 7 01:47:26.639225 systemd[1]: Started sshd@0-10.0.0.110:22-10.0.0.1:51480.service - OpenSSH per-connection server daemon (10.0.0.1:51480).
Mar 7 01:47:27.680947 sshd[1541]: Accepted publickey for core from 10.0.0.1 port 51480 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:47:27.704496 sshd[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:47:27.775263 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 7 01:47:27.817154 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 7 01:47:27.843914 systemd-logind[1442]: New session 1 of user core.
Mar 7 01:47:28.332526 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 7 01:47:28.388799 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 7 01:47:28.519128 (systemd)[1551]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 7 01:47:28.522917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:47:28.531489 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 7 01:47:28.590466 (kubelet)[1555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:47:29.236849 systemd[1551]: Queued start job for default target default.target.
Mar 7 01:47:29.253594 systemd[1551]: Created slice app.slice - User Application Slice.
Mar 7 01:47:29.254492 systemd[1551]: Reached target paths.target - Paths.
Mar 7 01:47:29.259809 systemd[1551]: Reached target timers.target - Timers.
Mar 7 01:47:29.267833 systemd[1551]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 7 01:47:29.337606 systemd[1551]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 7 01:47:29.337835 systemd[1551]: Reached target sockets.target - Sockets.
Mar 7 01:47:29.337860 systemd[1551]: Reached target basic.target - Basic System.
Mar 7 01:47:29.337934 systemd[1551]: Reached target default.target - Main User Target.
Mar 7 01:47:29.338115 systemd[1551]: Startup finished in 634ms.
Mar 7 01:47:29.338163 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 7 01:47:29.366541 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 7 01:47:29.369169 systemd[1]: Startup finished in 5.250s (kernel) + 21.720s (initrd) + 26.719s (userspace) = 53.690s.
Mar 7 01:47:29.539541 systemd[1]: Started sshd@1-10.0.0.110:22-10.0.0.1:51486.service - OpenSSH per-connection server daemon (10.0.0.1:51486).
Mar 7 01:47:29.670885 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 51486 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:47:29.674981 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:47:29.693169 systemd-logind[1442]: New session 2 of user core.
Mar 7 01:47:29.712160 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 7 01:47:29.859835 sshd[1573]: pam_unix(sshd:session): session closed for user core
Mar 7 01:47:29.885387 systemd[1]: sshd@1-10.0.0.110:22-10.0.0.1:51486.service: Deactivated successfully.
Mar 7 01:47:29.888233 systemd[1]: session-2.scope: Deactivated successfully.
Mar 7 01:47:29.897126 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit.
Mar 7 01:47:29.914965 systemd[1]: Started sshd@2-10.0.0.110:22-10.0.0.1:51500.service - OpenSSH per-connection server daemon (10.0.0.1:51500).
Mar 7 01:47:29.917313 systemd-logind[1442]: Removed session 2.
Mar 7 01:47:30.072666 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 51500 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:47:30.080470 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:47:30.111751 systemd-logind[1442]: New session 3 of user core.
Mar 7 01:47:30.135576 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 7 01:47:30.216763 sshd[1580]: pam_unix(sshd:session): session closed for user core
Mar 7 01:47:30.257195 systemd[1]: sshd@2-10.0.0.110:22-10.0.0.1:51500.service: Deactivated successfully.
Mar 7 01:47:30.267700 systemd[1]: session-3.scope: Deactivated successfully.
Mar 7 01:47:30.278678 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit.
Mar 7 01:47:30.295582 systemd[1]: Started sshd@3-10.0.0.110:22-10.0.0.1:38772.service - OpenSSH per-connection server daemon (10.0.0.1:38772).
Mar 7 01:47:30.306557 systemd-logind[1442]: Removed session 3.
Mar 7 01:47:30.401227 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 38772 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:47:30.409174 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:47:30.447499 systemd-logind[1442]: New session 4 of user core.
Mar 7 01:47:30.484183 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 7 01:47:30.630873 sshd[1587]: pam_unix(sshd:session): session closed for user core
Mar 7 01:47:30.652707 systemd[1]: sshd@3-10.0.0.110:22-10.0.0.1:38772.service: Deactivated successfully.
Mar 7 01:47:30.668954 systemd[1]: session-4.scope: Deactivated successfully.
Mar 7 01:47:30.677861 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit.
Mar 7 01:47:30.692551 systemd[1]: Started sshd@4-10.0.0.110:22-10.0.0.1:38774.service - OpenSSH per-connection server daemon (10.0.0.1:38774).
Mar 7 01:47:30.698135 systemd-logind[1442]: Removed session 4.
Mar 7 01:47:30.832948 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 38774 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:47:30.836166 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:47:30.856890 systemd-logind[1442]: New session 5 of user core.
Mar 7 01:47:30.868419 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 7 01:47:31.175123 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 7 01:47:31.176884 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:47:34.405694 kubelet[1555]: E0307 01:47:34.404786 1555 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:47:34.423897 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:47:34.424935 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:47:34.428541 systemd[1]: kubelet.service: Consumed 5.520s CPU time.
Mar 7 01:47:41.886917 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 7 01:47:41.894820 (dockerd)[1616]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 7 01:47:44.565860 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:47:44.725162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:47:48.103892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:47:48.247954 (kubelet)[1635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:47:48.319266 dockerd[1616]: time="2026-03-07T01:47:48.318452770Z" level=info msg="Starting up"
Mar 7 01:47:52.537127 kubelet[1635]: E0307 01:47:52.535831 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:47:52.571296 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:47:52.574302 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:47:52.580270 systemd[1]: kubelet.service: Consumed 3.941s CPU time.
Mar 7 01:47:52.671971 dockerd[1616]: time="2026-03-07T01:47:52.665372351Z" level=info msg="Loading containers: start."
Mar 7 01:47:54.194152 kernel: Initializing XFRM netlink socket
Mar 7 01:47:55.647374 systemd-networkd[1384]: docker0: Link UP
Mar 7 01:47:55.874109 dockerd[1616]: time="2026-03-07T01:47:55.868476066Z" level=info msg="Loading containers: done."
Mar 7 01:47:56.241325 dockerd[1616]: time="2026-03-07T01:47:56.238914090Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 7 01:47:56.241325 dockerd[1616]: time="2026-03-07T01:47:56.239864263Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 7 01:47:56.252290 dockerd[1616]: time="2026-03-07T01:47:56.245724064Z" level=info msg="Daemon has completed initialization"
Mar 7 01:47:57.252819 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 7 01:47:57.281537 dockerd[1616]: time="2026-03-07T01:47:57.232638027Z" level=info msg="API listen on /run/docker.sock"
Mar 7 01:48:02.699648 update_engine[1449]: I20260307 01:48:02.694249 1449 update_attempter.cc:509] Updating boot flags...
Mar 7 01:48:02.795748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 7 01:48:03.504665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:48:03.548118 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1787)
Mar 7 01:48:06.142498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:48:06.204564 (kubelet)[1799]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:48:06.443662 containerd[1460]: time="2026-03-07T01:48:06.442661667Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 7 01:48:08.638911 kubelet[1799]: E0307 01:48:08.638827 1799 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:48:08.659616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:48:08.659915 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:48:08.665281 systemd[1]: kubelet.service: Consumed 3.300s CPU time.
Mar 7 01:48:10.346802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1759239143.mount: Deactivated successfully.
Mar 7 01:48:18.905436 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 7 01:48:18.939929 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:48:21.802565 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:48:21.837977 (kubelet)[1877]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:48:24.029556 kubelet[1877]: E0307 01:48:24.025709 1877 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:48:24.038594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:48:24.038904 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:48:24.041682 systemd[1]: kubelet.service: Consumed 2.862s CPU time.
Mar 7 01:48:34.282178 containerd[1460]: time="2026-03-07T01:48:34.280455898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:34.287629 containerd[1460]: time="2026-03-07T01:48:34.285394390Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186"
Mar 7 01:48:34.308543 containerd[1460]: time="2026-03-07T01:48:34.308396591Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:34.310446 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 7 01:48:34.326118 containerd[1460]: time="2026-03-07T01:48:34.317930401Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 27.875129673s"
Mar 7 01:48:34.326118 containerd[1460]: time="2026-03-07T01:48:34.319967390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:34.326118 containerd[1460]: time="2026-03-07T01:48:34.320768782Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 7 01:48:34.331370 containerd[1460]: time="2026-03-07T01:48:34.328435136Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 7 01:48:34.360791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:48:37.468891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:48:37.829479 (kubelet)[1894]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:48:39.758909 kubelet[1894]: E0307 01:48:39.758249 1894 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:48:39.781224 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:48:39.783297 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:48:39.785282 systemd[1]: kubelet.service: Consumed 3.499s CPU time.
Mar 7 01:48:45.942269 containerd[1460]: time="2026-03-07T01:48:45.941630007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:45.948923 containerd[1460]: time="2026-03-07T01:48:45.948851356Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810"
Mar 7 01:48:45.954213 containerd[1460]: time="2026-03-07T01:48:45.952344715Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:45.969584 containerd[1460]: time="2026-03-07T01:48:45.969072150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:45.974129 containerd[1460]: time="2026-03-07T01:48:45.973655658Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 11.645169035s"
Mar 7 01:48:45.974129 containerd[1460]: time="2026-03-07T01:48:45.973709898Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 7 01:48:46.001231 containerd[1460]: time="2026-03-07T01:48:45.999461868Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 7 01:48:49.816116 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 7 01:48:49.828878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:48:51.635830 (kubelet)[1919]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:48:51.636338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:48:52.609887 kubelet[1919]: E0307 01:48:52.609823 1919 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:48:52.637732 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:48:52.638380 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:48:52.643701 systemd[1]: kubelet.service: Consumed 1.846s CPU time.
Mar 7 01:48:58.190718 containerd[1460]: time="2026-03-07T01:48:58.189866161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:58.204791 containerd[1460]: time="2026-03-07T01:48:58.203401432Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746"
Mar 7 01:48:58.212628 containerd[1460]: time="2026-03-07T01:48:58.211800432Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:58.251823 containerd[1460]: time="2026-03-07T01:48:58.249873796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:58.267776 containerd[1460]: time="2026-03-07T01:48:58.264698354Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 12.26517463s"
Mar 7 01:48:58.267776 containerd[1460]: time="2026-03-07T01:48:58.264814208Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 7 01:48:58.270592 containerd[1460]: time="2026-03-07T01:48:58.270359409Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 7 01:49:02.804521 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 7 01:49:03.024506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:49:05.684447 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:49:05.709620 (kubelet)[1938]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:49:06.120462 kubelet[1938]: E0307 01:49:06.120336 1938 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:49:06.134685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:49:06.135156 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:49:06.929366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3623913001.mount: Deactivated successfully.
Mar 7 01:49:11.872484 containerd[1460]: time="2026-03-07T01:49:11.869412542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:49:11.878475 containerd[1460]: time="2026-03-07T01:49:11.876582310Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647"
Mar 7 01:49:11.880371 containerd[1460]: time="2026-03-07T01:49:11.879939319Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:49:11.898427 containerd[1460]: time="2026-03-07T01:49:11.896831380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:49:11.907413 containerd[1460]: time="2026-03-07T01:49:11.905580619Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 13.635156916s"
Mar 7 01:49:11.907413 containerd[1460]: time="2026-03-07T01:49:11.905938671Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 7 01:49:11.919458 containerd[1460]: time="2026-03-07T01:49:11.915418963Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 7 01:49:12.997811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2842294057.mount: Deactivated successfully.
Mar 7 01:49:16.335938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Mar 7 01:49:16.534727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:49:19.841213 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:49:19.849665 (kubelet)[2003]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:49:21.725571 kubelet[2003]: E0307 01:49:21.722806 2003 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:49:21.748165 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:49:21.749714 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:49:21.754156 systemd[1]: kubelet.service: Consumed 2.370s CPU time.
Mar 7 01:49:31.863754 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Mar 7 01:49:31.914595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:49:33.345197 containerd[1460]: time="2026-03-07T01:49:33.343697738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:49:33.364491 containerd[1460]: time="2026-03-07T01:49:33.364141601Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Mar 7 01:49:33.382162 containerd[1460]: time="2026-03-07T01:49:33.381766814Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:49:33.472787 containerd[1460]: time="2026-03-07T01:49:33.472577204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:49:33.486553 containerd[1460]: time="2026-03-07T01:49:33.485688051Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 21.570205238s"
Mar 7 01:49:33.486553 containerd[1460]: time="2026-03-07T01:49:33.486182607Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 7 01:49:33.519731 containerd[1460]: time="2026-03-07T01:49:33.519289177Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 7 01:49:33.610458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:49:33.626666 (kubelet)[2030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:49:34.957589 kubelet[2030]: E0307 01:49:34.957118 2030 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:49:34.974470 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:49:34.974940 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:49:34.979532 systemd[1]: kubelet.service: Consumed 1.822s CPU time.
Mar 7 01:49:35.409863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2366358160.mount: Deactivated successfully.
Mar 7 01:49:35.469096 containerd[1460]: time="2026-03-07T01:49:35.467357280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:49:35.479132 containerd[1460]: time="2026-03-07T01:49:35.475707103Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 7 01:49:35.507252 containerd[1460]: time="2026-03-07T01:49:35.506661960Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:49:35.527985 containerd[1460]: time="2026-03-07T01:49:35.527478872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:49:35.533273 containerd[1460]: time="2026-03-07T01:49:35.531159596Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.008283412s"
Mar 7 01:49:35.533273 containerd[1460]: time="2026-03-07T01:49:35.531210612Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 7 01:49:35.554343 containerd[1460]: time="2026-03-07T01:49:35.552873589Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 7 01:49:36.998614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3868355460.mount: Deactivated successfully.
Mar 7 01:49:45.047529 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Mar 7 01:49:45.071246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:49:46.046226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:49:46.076949 (kubelet)[2103]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:49:47.755330 kubelet[2103]: E0307 01:49:47.754648 2103 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:49:47.770678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:49:47.771453 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:49:47.781453 systemd[1]: kubelet.service: Consumed 1.532s CPU time.
Mar 7 01:49:51.339445 containerd[1460]: time="2026-03-07T01:49:51.338957588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:49:51.359614 containerd[1460]: time="2026-03-07T01:49:51.355134107Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840"
Mar 7 01:49:51.378531 containerd[1460]: time="2026-03-07T01:49:51.376876303Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:49:51.443973 containerd[1460]: time="2026-03-07T01:49:51.443581838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:49:51.449530 containerd[1460]: time="2026-03-07T01:49:51.448161899Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 15.894885019s"
Mar 7 01:49:51.449530 containerd[1460]: time="2026-03-07T01:49:51.448210140Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 7 01:49:57.817515 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Mar 7 01:49:57.863784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:49:59.224286 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:49:59.245134 (kubelet)[2154]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:49:59.852378 kubelet[2154]: E0307 01:49:59.852205 2154 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:49:59.862116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:49:59.864198 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:50:06.425721 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:50:06.455984 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:50:06.653289 systemd[1]: Reloading requested from client PID 2171 ('systemctl') (unit session-5.scope)...
Mar 7 01:50:06.653578 systemd[1]: Reloading...
Mar 7 01:50:07.059131 zram_generator::config[2208]: No configuration found.
Mar 7 01:50:07.707813 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:50:08.037653 systemd[1]: Reloading finished in 1382 ms.
Mar 7 01:50:08.241670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:50:08.248841 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:50:08.250218 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:50:08.253305 systemd[1]: kubelet.service: Deactivated successfully.
Mar 7 01:50:08.255339 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:50:08.288711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:50:09.198477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:50:09.202735 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:50:10.751173 kubelet[2261]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:50:10.751173 kubelet[2261]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 01:50:10.751173 kubelet[2261]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:50:10.751173 kubelet[2261]: I0307 01:50:10.749714 2261 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 01:50:13.999225 kubelet[2261]: I0307 01:50:13.997488 2261 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 7 01:50:13.999225 kubelet[2261]: I0307 01:50:13.997534 2261 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:50:14.002498 kubelet[2261]: I0307 01:50:14.000563 2261 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 01:50:14.759704 kubelet[2261]: E0307 01:50:14.754698 2261 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:50:14.780564 kubelet[2261]: I0307 01:50:14.777254 2261 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 01:50:15.021594 kubelet[2261]: E0307 01:50:15.019772 2261 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 01:50:15.021594 kubelet[2261]: I0307 01:50:15.019834 2261 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 7 01:50:15.082600 kubelet[2261]: I0307 01:50:15.082403 2261 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 7 01:50:15.084943 kubelet[2261]: I0307 01:50:15.083337 2261 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:50:15.090870 kubelet[2261]: I0307 01:50:15.083465 2261 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 01:50:15.111135 kubelet[2261]: I0307 01:50:15.098933 2261 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 01:50:15.111135 kubelet[2261]: I0307 01:50:15.098977 2261 container_manager_linux.go:303] "Creating device plugin manager"
Mar 7 01:50:15.111135 kubelet[2261]: I0307 01:50:15.106489 2261 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:50:15.142751 kubelet[2261]: I0307 01:50:15.141516 2261 kubelet.go:480] "Attempting to sync node with API server"
Mar 7 01:50:15.142751 kubelet[2261]: I0307 01:50:15.142452 2261 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:50:15.144881 kubelet[2261]: I0307 01:50:15.143580 2261 kubelet.go:386] "Adding apiserver pod source"
Mar 7 01:50:15.144881 kubelet[2261]: I0307 01:50:15.143812 2261 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:50:15.168966 kubelet[2261]: E0307 01:50:15.168903 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 01:50:15.172943 kubelet[2261]: E0307 01:50:15.172888 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 01:50:15.188260 kubelet[2261]: I0307 01:50:15.183969 2261 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 01:50:15.188260 kubelet[2261]: I0307 01:50:15.187895 2261 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:50:15.218705 kubelet[2261]: W0307 01:50:15.212568 2261 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 7 01:50:15.275179 kubelet[2261]: I0307 01:50:15.268254 2261 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 7 01:50:15.275179 kubelet[2261]: I0307 01:50:15.268603 2261 server.go:1289] "Started kubelet"
Mar 7 01:50:15.275584 kubelet[2261]: I0307 01:50:15.275541 2261 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:50:15.329401 kubelet[2261]: I0307 01:50:15.275919 2261 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:50:15.386357 kubelet[2261]: I0307 01:50:15.383336 2261 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 01:50:15.387915 kubelet[2261]: I0307 01:50:15.387886 2261 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:50:15.418525 kubelet[2261]: E0307 01:50:15.384309 2261 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.110:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.110:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6c0f0fc39172 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:50:15.26849573 +0000 UTC m=+6.025294399,LastTimestamp:2026-03-07 01:50:15.26849573 +0000 UTC m=+6.025294399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 7 01:50:15.432192 kubelet[2261]: E0307 01:50:15.432153 2261 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 01:50:15.455867 kubelet[2261]: I0307 01:50:15.455694 2261 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 01:50:15.458203 kubelet[2261]: I0307 01:50:15.456936 2261 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:50:15.468861 kubelet[2261]: I0307 01:50:15.463418 2261 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 7 01:50:15.468861 kubelet[2261]: E0307 01:50:15.464150 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:15.468861 kubelet[2261]: I0307 01:50:15.465142 2261 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 7 01:50:15.468861 kubelet[2261]: I0307 01:50:15.465394 2261 reconciler.go:26] "Reconciler: start to sync state"
Mar 7 01:50:15.480764 kubelet[2261]: E0307 01:50:15.479423 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="200ms"
Mar 7 01:50:15.487924 kubelet[2261]: E0307 01:50:15.482772 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:50:15.487924 kubelet[2261]: I0307 01:50:15.485801 2261 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:50:15.487924 kubelet[2261]: I0307 01:50:15.485922 2261 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:50:15.488380 kubelet[2261]: I0307 01:50:15.488275 2261 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:50:15.605440 kubelet[2261]: E0307 01:50:15.604925 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:15.708244 kubelet[2261]: E0307 01:50:15.706814 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:15.710138 kubelet[2261]: E0307 01:50:15.709867 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="400ms"
Mar 7 01:50:15.826839 kubelet[2261]: I0307 01:50:15.814796 2261 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 01:50:15.923467 kubelet[2261]: I0307 01:50:15.883308 2261 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 01:50:15.923467 kubelet[2261]: I0307 01:50:15.883931 2261 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:50:15.923467 kubelet[2261]: E0307 01:50:15.841839 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:16.038963 kubelet[2261]: E0307 01:50:16.038318 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:16.045485 kubelet[2261]: E0307 01:50:16.043463 2261 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.110:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.110:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6c0f0fc39172 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:50:15.26849573 +0000 UTC m=+6.025294399,LastTimestamp:2026-03-07 01:50:15.26849573 +0000 UTC m=+6.025294399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 7 01:50:16.045485 kubelet[2261]: I0307 01:50:16.044922 2261 policy_none.go:49] "None policy: Start"
Mar 7 01:50:16.045485 kubelet[2261]: I0307 01:50:16.045464 2261 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 7 01:50:16.045485 kubelet[2261]: I0307 01:50:16.045706 2261 state_mem.go:35] "Initializing new in-memory state store"
Mar 7 01:50:16.115091 kubelet[2261]: E0307 01:50:16.113895 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="800ms"
Mar 7 01:50:16.115476 kubelet[2261]: E0307 01:50:16.115259 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 01:50:16.134718 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 7 01:50:16.151241 kubelet[2261]: E0307 01:50:16.150277 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:16.208845 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 7 01:50:16.219289 kubelet[2261]: I0307 01:50:16.218887 2261 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:50:16.220914 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 7 01:50:16.235355 kubelet[2261]: I0307 01:50:16.235117 2261 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:50:16.236876 kubelet[2261]: I0307 01:50:16.236784 2261 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 7 01:50:16.237360 kubelet[2261]: I0307 01:50:16.237218 2261 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:50:16.237360 kubelet[2261]: I0307 01:50:16.237354 2261 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 7 01:50:16.237780 kubelet[2261]: E0307 01:50:16.237429 2261 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:50:16.254166 kubelet[2261]: E0307 01:50:16.250804 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:50:16.254166 kubelet[2261]: E0307 01:50:16.252525 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:16.288422 kubelet[2261]: E0307 01:50:16.287869 2261 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 01:50:16.322555 kubelet[2261]: I0307 01:50:16.322317 2261 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 01:50:16.325157 kubelet[2261]: I0307 01:50:16.324431 2261 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 01:50:16.327526 kubelet[2261]: I0307 01:50:16.327352 2261 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 01:50:16.332370 kubelet[2261]: E0307 01:50:16.332344 2261 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 01:50:16.334295 kubelet[2261]: E0307 01:50:16.334273 2261 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 7 01:50:16.382631 kubelet[2261]: I0307 01:50:16.382582 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost"
Mar 7 01:50:16.457838 kubelet[2261]: I0307 01:50:16.457481 2261 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 7 01:50:16.461438 kubelet[2261]: E0307 01:50:16.460118 2261 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost"
Mar 7 01:50:16.484968 kubelet[2261]: I0307 01:50:16.484390 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f13adfe9a9566b07dae44c4da0fdcbd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f13adfe9a9566b07dae44c4da0fdcbd\") " pod="kube-system/kube-apiserver-localhost"
Mar 7 01:50:16.484968 kubelet[2261]: I0307 01:50:16.484440 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f13adfe9a9566b07dae44c4da0fdcbd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f13adfe9a9566b07dae44c4da0fdcbd\") " pod="kube-system/kube-apiserver-localhost"
Mar 7 01:50:16.484968 kubelet[2261]: I0307 01:50:16.484606 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f13adfe9a9566b07dae44c4da0fdcbd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6f13adfe9a9566b07dae44c4da0fdcbd\") " pod="kube-system/kube-apiserver-localhost"
Mar 7 01:50:16.509404 kubelet[2261]: E0307 01:50:16.509368 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 01:50:16.515413 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice.
Mar 7 01:50:16.547897 kubelet[2261]: E0307 01:50:16.543497 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:50:16.547897 kubelet[2261]: E0307 01:50:16.544533 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:50:16.548602 containerd[1460]: time="2026-03-07T01:50:16.548455052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}"
Mar 7 01:50:16.574550 systemd[1]: Created slice kubepods-burstable-pod6f13adfe9a9566b07dae44c4da0fdcbd.slice - libcontainer container kubepods-burstable-pod6f13adfe9a9566b07dae44c4da0fdcbd.slice.
Mar 7 01:50:16.584126 kubelet[2261]: E0307 01:50:16.582559 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:50:16.587732 kubelet[2261]: I0307 01:50:16.586256 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:50:16.587732 kubelet[2261]: I0307 01:50:16.586355 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:50:16.587732 kubelet[2261]: I0307 01:50:16.586391 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:50:16.587732 kubelet[2261]: I0307 01:50:16.586471 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:50:16.587732 kubelet[2261]: I0307 01:50:16.586499 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:50:16.639583 systemd[1]: Created slice kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice.
Mar 7 01:50:16.656732 kubelet[2261]: E0307 01:50:16.655453 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:50:16.677579 kubelet[2261]: I0307 01:50:16.677260 2261 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 7 01:50:16.678349 kubelet[2261]: E0307 01:50:16.678215 2261 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost"
Mar 7 01:50:16.804596 kubelet[2261]: E0307 01:50:16.803868 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:50:16.867861 kubelet[2261]: E0307 01:50:16.866750 2261 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:50:16.884833 kubelet[2261]: E0307 01:50:16.884452 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:50:16.889712 containerd[1460]: time="2026-03-07T01:50:16.888801081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6f13adfe9a9566b07dae44c4da0fdcbd,Namespace:kube-system,Attempt:0,}"
Mar 7 01:50:16.917163 kubelet[2261]: E0307 01:50:16.916572 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="1.6s"
Mar 7 01:50:16.976222 kubelet[2261]: E0307 01:50:16.975354 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:50:16.978403 containerd[1460]: time="2026-03-07T01:50:16.977956757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}"
Mar 7 01:50:17.107976 kubelet[2261]: I0307 01:50:17.107407 2261 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 7 01:50:17.108601 kubelet[2261]: E0307 01:50:17.108444 2261 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost"
Mar 7 01:50:17.681173 kubelet[2261]: E0307 01:50:17.680782 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:50:17.939649 kubelet[2261]: I0307 01:50:17.939453 2261 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 7 01:50:17.942145 kubelet[2261]: E0307 01:50:17.941836 2261 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost"
Mar 7 01:50:17.941862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount239452637.mount: Deactivated successfully.
Mar 7 01:50:18.003823 containerd[1460]: time="2026-03-07T01:50:17.989215411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:50:18.028624 containerd[1460]: time="2026-03-07T01:50:18.027215644Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 7 01:50:18.043414 containerd[1460]: time="2026-03-07T01:50:18.040646075Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:50:18.043414 containerd[1460]: time="2026-03-07T01:50:18.042493844Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:50:18.050236 containerd[1460]: time="2026-03-07T01:50:18.047867375Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:50:18.053163 containerd[1460]: time="2026-03-07T01:50:18.052874491Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 7 01:50:18.057144 containerd[1460]: time="2026-03-07T01:50:18.056465882Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 7 01:50:18.076620 containerd[1460]: time="2026-03-07T01:50:18.076242161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:50:18.083172 containerd[1460]: time="2026-03-07T01:50:18.081223185Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.102992031s"
Mar 7 01:50:18.083172 containerd[1460]: time="2026-03-07T01:50:18.082901446Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.193993002s"
Mar 7 01:50:18.089455 containerd[1460]: time="2026-03-07T01:50:18.087201118Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.538645987s"
Mar 7 01:50:18.525462 kubelet[2261]: E0307 01:50:18.524930 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="3.2s"
Mar 7 01:50:18.530186 kubelet[2261]: E0307 01:50:18.529455 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 01:50:18.865773 kubelet[2261]: E0307 01:50:18.865633 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 01:50:19.300633 kubelet[2261]: E0307 01:50:19.294982 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:50:19.628598 kubelet[2261]: I0307 01:50:19.624409 2261 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 7 01:50:19.628598 kubelet[2261]: E0307 01:50:19.625946 2261 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost"
Mar 7 01:50:19.712530 kubelet[2261]: E0307 01:50:19.711465 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:50:21.224196 kubelet[2261]: E0307 01:50:21.218174 2261 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:50:21.751126 kubelet[2261]: E0307 01:50:21.750412 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="6.4s"
Mar 7 01:50:22.372622 containerd[1460]: time="2026-03-07T01:50:22.369693283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:50:22.386528 containerd[1460]: time="2026-03-07T01:50:22.375704919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:50:22.386528 containerd[1460]: time="2026-03-07T01:50:22.386404928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:50:22.388953 containerd[1460]: time="2026-03-07T01:50:22.387398915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:50:22.427600 containerd[1460]: time="2026-03-07T01:50:22.339777477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:50:22.427600 containerd[1460]: time="2026-03-07T01:50:22.427254351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:50:22.427600 containerd[1460]: time="2026-03-07T01:50:22.427274388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:50:22.432951 containerd[1460]: time="2026-03-07T01:50:22.430537706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:50:22.463076 containerd[1460]: time="2026-03-07T01:50:22.461269193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:50:22.463076 containerd[1460]: time="2026-03-07T01:50:22.462133415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:50:22.463076 containerd[1460]: time="2026-03-07T01:50:22.462378665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:50:22.463681 containerd[1460]: time="2026-03-07T01:50:22.463128964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:50:22.921936 kubelet[2261]: I0307 01:50:22.921510 2261 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 7 01:50:22.925199 kubelet[2261]: E0307 01:50:22.922543 2261 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost"
Mar 7 01:50:23.061592 systemd[1]: run-containerd-runc-k8s.io-524aeeaff57b6a31bd0b46193d1cbb7decb2e641243be6e2abf636fa104b5dfa-runc.0R0oYl.mount: Deactivated successfully.
Mar 7 01:50:23.146129 systemd[1]: Started cri-containerd-524aeeaff57b6a31bd0b46193d1cbb7decb2e641243be6e2abf636fa104b5dfa.scope - libcontainer container 524aeeaff57b6a31bd0b46193d1cbb7decb2e641243be6e2abf636fa104b5dfa.
Mar 7 01:50:23.193629 systemd[1]: Started cri-containerd-eb47d84e83649cb63890f4ebd0799e8f9869c9332a2dcde1cb72d18b0596245a.scope - libcontainer container eb47d84e83649cb63890f4ebd0799e8f9869c9332a2dcde1cb72d18b0596245a.
Mar 7 01:50:23.258101 kubelet[2261]: E0307 01:50:23.257629 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:50:23.275635 systemd[1]: Started cri-containerd-d452104147c675fbca0aceccd40bde5c96cc3ae0a4719ab5a918c7f228ad72cb.scope - libcontainer container d452104147c675fbca0aceccd40bde5c96cc3ae0a4719ab5a918c7f228ad72cb.
Mar 7 01:50:23.575503 kubelet[2261]: E0307 01:50:23.574945 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:50:23.845235 containerd[1460]: time="2026-03-07T01:50:23.837420124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"d452104147c675fbca0aceccd40bde5c96cc3ae0a4719ab5a918c7f228ad72cb\""
Mar 7 01:50:23.850727 containerd[1460]: time="2026-03-07T01:50:23.846923268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6f13adfe9a9566b07dae44c4da0fdcbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"524aeeaff57b6a31bd0b46193d1cbb7decb2e641243be6e2abf636fa104b5dfa\""
Mar 7 01:50:23.879321 kubelet[2261]: E0307 01:50:23.870248 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:50:23.879321 kubelet[2261]: E0307 01:50:23.872380 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:50:23.936607 kubelet[2261]: E0307 01:50:23.935459 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 01:50:23.955186 containerd[1460]: time="2026-03-07T01:50:23.954495036Z" level=info msg="CreateContainer within sandbox \"524aeeaff57b6a31bd0b46193d1cbb7decb2e641243be6e2abf636fa104b5dfa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 7 01:50:23.975310 containerd[1460]: time="2026-03-07T01:50:23.974520230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb47d84e83649cb63890f4ebd0799e8f9869c9332a2dcde1cb72d18b0596245a\""
Mar 7 01:50:23.994914 kubelet[2261]: E0307 01:50:23.993849 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:50:24.007163 containerd[1460]: time="2026-03-07T01:50:24.005276931Z" level=info msg="CreateContainer within sandbox \"d452104147c675fbca0aceccd40bde5c96cc3ae0a4719ab5a918c7f228ad72cb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 7 01:50:24.025915 containerd[1460]: time="2026-03-07T01:50:24.025342920Z" level=info msg="CreateContainer within sandbox \"eb47d84e83649cb63890f4ebd0799e8f9869c9332a2dcde1cb72d18b0596245a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 7 01:50:24.527678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3483701092.mount: Deactivated successfully.
Mar 7 01:50:24.573233 kubelet[2261]: E0307 01:50:24.547356 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 01:50:24.573691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2749975967.mount: Deactivated successfully.
Mar 7 01:50:24.626415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651952723.mount: Deactivated successfully.
Mar 7 01:50:24.646649 containerd[1460]: time="2026-03-07T01:50:24.646352706Z" level=info msg="CreateContainer within sandbox \"524aeeaff57b6a31bd0b46193d1cbb7decb2e641243be6e2abf636fa104b5dfa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1111199f756a309af3299c6104a8ea2bea7dd5c8abdb820ac64bbde6061b1070\""
Mar 7 01:50:24.657452 containerd[1460]: time="2026-03-07T01:50:24.656374910Z" level=info msg="StartContainer for \"1111199f756a309af3299c6104a8ea2bea7dd5c8abdb820ac64bbde6061b1070\""
Mar 7 01:50:24.779919 containerd[1460]: time="2026-03-07T01:50:24.769431284Z" level=info msg="CreateContainer within sandbox \"d452104147c675fbca0aceccd40bde5c96cc3ae0a4719ab5a918c7f228ad72cb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5a58e843b305008fb11daa867de1d6312527a6dcdab96664bddde59148d4512e\""
Mar 7 01:50:24.867630 containerd[1460]: time="2026-03-07T01:50:24.867482940Z" level=info msg="StartContainer for \"5a58e843b305008fb11daa867de1d6312527a6dcdab96664bddde59148d4512e\""
Mar 7 01:50:24.944951 containerd[1460]: time="2026-03-07T01:50:24.944222378Z" level=info msg="CreateContainer within sandbox \"eb47d84e83649cb63890f4ebd0799e8f9869c9332a2dcde1cb72d18b0596245a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"11ffa5dbbb973bc8b642835b5f4f15aa89f3b0422eac9ae78f964ce8753cafc1\""
Mar 7 01:50:24.958121 containerd[1460]: time="2026-03-07T01:50:24.957602194Z" level=info msg="StartContainer for \"11ffa5dbbb973bc8b642835b5f4f15aa89f3b0422eac9ae78f964ce8753cafc1\""
Mar 7 01:50:25.150108 systemd[1]: Started cri-containerd-1111199f756a309af3299c6104a8ea2bea7dd5c8abdb820ac64bbde6061b1070.scope - libcontainer container 1111199f756a309af3299c6104a8ea2bea7dd5c8abdb820ac64bbde6061b1070.
Mar 7 01:50:25.367854 systemd[1]: Started cri-containerd-5a58e843b305008fb11daa867de1d6312527a6dcdab96664bddde59148d4512e.scope - libcontainer container 5a58e843b305008fb11daa867de1d6312527a6dcdab96664bddde59148d4512e.
Mar 7 01:50:26.086534 kubelet[2261]: E0307 01:50:26.086272 2261 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.110:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.110:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6c0f0fc39172 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:50:15.26849573 +0000 UTC m=+6.025294399,LastTimestamp:2026-03-07 01:50:15.26849573 +0000 UTC m=+6.025294399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 7 01:50:26.540117 kubelet[2261]: E0307 01:50:26.527754 2261 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 7 01:50:27.082760 containerd[1460]: time="2026-03-07T01:50:26.981455368Z" level=info msg="StartContainer for \"5a58e843b305008fb11daa867de1d6312527a6dcdab96664bddde59148d4512e\" returns successfully"
Mar 7 01:50:27.082760 containerd[1460]: time="2026-03-07T01:50:26.982162334Z" level=info msg="StartContainer for \"1111199f756a309af3299c6104a8ea2bea7dd5c8abdb820ac64bbde6061b1070\" returns successfully"
Mar 7 01:50:27.252087 kubelet[2261]: E0307 01:50:27.249911 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:50:27.252087 kubelet[2261]: E0307 01:50:27.250392 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:50:27.330926 systemd[1]: Started cri-containerd-11ffa5dbbb973bc8b642835b5f4f15aa89f3b0422eac9ae78f964ce8753cafc1.scope - libcontainer container 11ffa5dbbb973bc8b642835b5f4f15aa89f3b0422eac9ae78f964ce8753cafc1.
Mar 7 01:50:27.339169 kubelet[2261]: E0307 01:50:27.337541 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:50:27.339169 kubelet[2261]: E0307 01:50:27.337723 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:50:27.839632 containerd[1460]: time="2026-03-07T01:50:27.839504481Z" level=info msg="StartContainer for \"11ffa5dbbb973bc8b642835b5f4f15aa89f3b0422eac9ae78f964ce8753cafc1\" returns successfully"
Mar 7 01:50:28.154115 kubelet[2261]: E0307 01:50:28.153309 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="7s"
Mar 7 01:50:29.897643 kubelet[2261]: E0307 01:50:29.875364 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:50:29.897643 kubelet[2261]: E0307 01:50:29.875758 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:50:29.946368 kubelet[2261]: I0307 01:50:29.933520 2261 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 7 01:50:29.957150 kubelet[2261]: E0307 01:50:29.956169 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:50:29.961216 kubelet[2261]: E0307 01:50:29.959574 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:50:29.961644 kubelet[2261]: E0307 01:50:29.961608 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:50:29.964631 kubelet[2261]: E0307 01:50:29.964463 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:50:30.960790 kubelet[2261]: E0307 01:50:30.960586 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:50:30.972148 kubelet[2261]: E0307 01:50:30.968490 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:50:31.031768 kubelet[2261]: E0307 01:50:31.031655 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:50:31.033621 kubelet[2261]: E0307 01:50:31.033595 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:50:31.907198 kubelet[2261]: E0307 01:50:31.886911 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:50:31.919528 kubelet[2261]: E0307 01:50:31.919303 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:50:32.018260 kubelet[2261]: E0307 01:50:32.017957 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:50:32.022967 kubelet[2261]: E0307 01:50:32.022937 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:50:33.018332 kubelet[2261]: E0307 01:50:33.017722 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:50:33.028241 kubelet[2261]: E0307 01:50:33.022619 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:50:36.106449 kubelet[2261]: E0307 01:50:36.106273 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:50:36.112197 kubelet[2261]: E0307 01:50:36.106867 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:50:36.539461 kubelet[2261]: E0307 01:50:36.530880 2261 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 7 01:50:39.857606 kubelet[2261]: E0307 01:50:39.856451 2261 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:50:40.020574 kubelet[2261]: E0307 01:50:40.020167 2261 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Mar 7 01:50:41.614753 kubelet[2261]: E0307 01:50:41.613534 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 01:50:41.869408 kubelet[2261]: E0307 01:50:41.862828 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:50:41.869408 kubelet[2261]: E0307 01:50:41.863361 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:50:41.909489 kubelet[2261]: E0307 01:50:41.909302 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:50:42.748355 kubelet[2261]: E0307 01:50:42.737926 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:50:43.027411 kubelet[2261]: E0307 01:50:43.027180 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 01:50:45.248804 kubelet[2261]: E0307 01:50:45.243873 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 7 01:50:46.207929 kubelet[2261]: E0307 01:50:46.173405 2261 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.110:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.189a6c0f0fc39172 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:50:15.26849573 +0000 UTC m=+6.025294399,LastTimestamp:2026-03-07 01:50:15.26849573 +0000 UTC m=+6.025294399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 7 01:50:46.538734 kubelet[2261]: E0307 01:50:46.536521 2261 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 7 01:50:47.051756 kubelet[2261]: I0307 01:50:47.049721 2261 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 7 01:50:51.361308 kubelet[2261]: E0307 01:50:51.358521 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:50:51.373815 kubelet[2261]: E0307 01:50:51.373635 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:50:51.996287 kubelet[2261]: I0307 01:50:51.987445 2261 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 7 01:50:51.996287 kubelet[2261]: E0307 01:50:51.987551 2261 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 7 01:50:52.766120 kubelet[2261]: E0307 01:50:52.764709 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:53.324072 kubelet[2261]: E0307 01:50:53.317340 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:53.420882 kubelet[2261]: E0307 01:50:53.418794 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:53.532894 kubelet[2261]: E0307 01:50:53.531629 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:53.879554 kubelet[2261]: E0307 01:50:53.875417 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:53.986614 kubelet[2261]: E0307 01:50:53.984490 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:54.166615 kubelet[2261]: E0307 01:50:54.153822 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:54.260494 kubelet[2261]: E0307 01:50:54.260329 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:54.361802 kubelet[2261]: E0307 01:50:54.360780 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:54.467793 kubelet[2261]: E0307 01:50:54.463780 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:54.578587 kubelet[2261]: E0307 01:50:54.568353 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:54.675538 kubelet[2261]: E0307 01:50:54.673667 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:54.780585 kubelet[2261]: E0307 01:50:54.779534 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:54.886871 kubelet[2261]: E0307 01:50:54.886346 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:55.086883 kubelet[2261]: E0307 01:50:55.013241 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:55.173455 kubelet[2261]: E0307 01:50:55.171619 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:55.273242 kubelet[2261]: E0307 01:50:55.272704 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:55.452699 kubelet[2261]: E0307 01:50:55.440485 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:55.546176 kubelet[2261]: E0307 01:50:55.545647 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:55.657428 kubelet[2261]: E0307 01:50:55.656394 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:55.784973 kubelet[2261]: E0307 01:50:55.783605 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:55.884652 kubelet[2261]: E0307 01:50:55.884560 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:55.989089 kubelet[2261]: E0307 01:50:55.988238 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:56.113531 kubelet[2261]: E0307 01:50:56.105559 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:56.238262 kubelet[2261]: E0307 01:50:56.224196 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:56.326780 kubelet[2261]: E0307 01:50:56.326544 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:56.471530 kubelet[2261]: E0307 01:50:56.461551 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:56.544384 kubelet[2261]: E0307 01:50:56.540595 2261 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 7 01:50:56.589117 kubelet[2261]: E0307 01:50:56.583969 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:56.706092 kubelet[2261]: E0307 01:50:56.704212 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:56.823310 kubelet[2261]: E0307 01:50:56.822599 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:56.928390 kubelet[2261]: E0307 01:50:56.927223 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:57.031506 kubelet[2261]: E0307 01:50:57.030936 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:57.285923 kubelet[2261]: E0307 01:50:57.246606 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:57.412115 kubelet[2261]: E0307 01:50:57.411747 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:57.542497 kubelet[2261]: E0307 01:50:57.539209 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:57.658660 kubelet[2261]: E0307 01:50:57.655406 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:57.769956 kubelet[2261]: E0307 01:50:57.766822 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:57.870080 kubelet[2261]: E0307 01:50:57.867705 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:57.971271 kubelet[2261]: E0307 01:50:57.968528 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:58.073247 kubelet[2261]: E0307 01:50:58.072785 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:58.176320 kubelet[2261]: E0307 01:50:58.174582 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:58.279728 kubelet[2261]: E0307 01:50:58.277220 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:58.408418 kubelet[2261]: E0307 01:50:58.400725 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:58.566592 kubelet[2261]: E0307 01:50:58.551542 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:58.726236 kubelet[2261]: E0307 01:50:58.653894 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:58.858734 kubelet[2261]: E0307 01:50:58.842952 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:58.958454 kubelet[2261]: E0307 01:50:58.955684 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:59.127367 kubelet[2261]: E0307 01:50:59.121651 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:59.272092 kubelet[2261]: E0307 01:50:59.249872 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:59.423342 kubelet[2261]: E0307 01:50:59.385764 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:59.528787 kubelet[2261]: E0307 01:50:59.527543 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:59.644367 kubelet[2261]: E0307 01:50:59.642357 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:59.745106 kubelet[2261]: E0307 01:50:59.742628 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:50:59.906318 kubelet[2261]: E0307 01:50:59.883663 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:51:00.055361 kubelet[2261]: E0307 01:51:00.053111 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:51:00.178379 kubelet[2261]: E0307 01:51:00.161068 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:51:00.333251 kubelet[2261]: E0307 01:51:00.331109 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:51:00.453445 kubelet[2261]: E0307 01:51:00.449964 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:51:00.550589 kubelet[2261]: E0307 01:51:00.550543 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:51:00.659864 kubelet[2261]: E0307 01:51:00.653587 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:51:00.754617 kubelet[2261]: E0307 01:51:00.754273 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:51:00.858401 kubelet[2261]: E0307 01:51:00.855840 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:51:01.000241 kubelet[2261]: E0307 01:51:00.986683 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:51:01.087748 kubelet[2261]: E0307 01:51:01.087681 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:51:01.196281 kubelet[2261]: E0307 01:51:01.191268 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:51:01.329555 kubelet[2261]: E0307 01:51:01.310775 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:51:01.440402 kubelet[2261]: E0307 01:51:01.438343 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:51:01.547357 kubelet[2261]: E0307 01:51:01.543888 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:51:01.646286 kubelet[2261]: E0307 01:51:01.645282 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:51:01.748976 kubelet[2261]: E0307 01:51:01.748872 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:51:01.948598 kubelet[2261]: E0307 01:51:01.931720 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:51:02.045312 kubelet[2261]: E0307 01:51:02.045240 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:51:02.167970 kubelet[2261]: E0307 01:51:02.167445 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:51:02.273677 kubelet[2261]: E0307 01:51:02.269716 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:02.375333 kubelet[2261]: E0307 01:51:02.371468 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:02.473567 kubelet[2261]: E0307 01:51:02.472924 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:02.577169 kubelet[2261]: E0307 01:51:02.577125 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:02.682119 kubelet[2261]: E0307 01:51:02.680599 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:02.788106 kubelet[2261]: E0307 01:51:02.787344 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:02.949779 kubelet[2261]: E0307 01:51:02.948290 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:03.178714 kubelet[2261]: E0307 01:51:03.166581 2261 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 7 01:51:03.338366 kubelet[2261]: E0307 01:51:03.334109 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:03.436936 kubelet[2261]: E0307 01:51:03.436574 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:03.730349 kubelet[2261]: E0307 01:51:03.548302 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:03.811264 kubelet[2261]: E0307 01:51:03.808339 2261 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:03.969776 kubelet[2261]: E0307 01:51:03.968847 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:04.117847 kubelet[2261]: E0307 01:51:04.097757 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:04.216842 kubelet[2261]: E0307 01:51:04.213663 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:04.415651 kubelet[2261]: E0307 01:51:04.368610 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:04.551836 kubelet[2261]: E0307 01:51:04.551493 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:04.662868 kubelet[2261]: E0307 01:51:04.659521 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:04.878104 kubelet[2261]: E0307 01:51:04.876466 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:04.981139 kubelet[2261]: E0307 01:51:04.980797 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:05.088569 kubelet[2261]: E0307 01:51:05.088473 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:05.352719 kubelet[2261]: E0307 01:51:05.322738 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:05.452924 kubelet[2261]: E0307 01:51:05.452797 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" 
Mar 7 01:51:05.703217 kubelet[2261]: E0307 01:51:05.679154 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:05.803587 kubelet[2261]: E0307 01:51:05.803166 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:05.913721 kubelet[2261]: E0307 01:51:05.913138 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:06.079925 kubelet[2261]: E0307 01:51:06.042204 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:06.144475 kubelet[2261]: E0307 01:51:06.143798 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:06.265122 kubelet[2261]: E0307 01:51:06.259882 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:06.374337 kubelet[2261]: E0307 01:51:06.369570 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:06.524212 kubelet[2261]: E0307 01:51:06.496481 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:51:06.550417 kubelet[2261]: E0307 01:51:06.542927 2261 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:51:06.649962 kubelet[2261]: I0307 01:51:06.648210 2261 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:51:06.685529 kubelet[2261]: I0307 01:51:06.682851 2261 apiserver.go:52] "Watching apiserver" Mar 7 01:51:06.914466 kubelet[2261]: I0307 01:51:06.904812 2261 desired_state_of_world_populator.go:158] "Finished populating initial desired state of 
world" Mar 7 01:51:07.268164 kubelet[2261]: E0307 01:51:07.261979 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:51:07.763808 kubelet[2261]: I0307 01:51:07.763683 2261 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 01:51:08.102522 kubelet[2261]: E0307 01:51:08.093908 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:51:08.358572 kubelet[2261]: I0307 01:51:08.357898 2261 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 01:51:08.572765 kubelet[2261]: E0307 01:51:08.572704 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:51:09.280577 kubelet[2261]: E0307 01:51:09.269255 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:51:09.920913 kubelet[2261]: I0307 01:51:09.918562 2261 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.918413145 podStartE2EDuration="2.918413145s" podCreationTimestamp="2026-03-07 01:51:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:51:09.679406897 +0000 UTC m=+60.436205576" watchObservedRunningTime="2026-03-07 01:51:09.918413145 +0000 UTC m=+60.675211885" Mar 7 01:51:09.920913 kubelet[2261]: I0307 01:51:09.924252 2261 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.919274113 podStartE2EDuration="1.919274113s" podCreationTimestamp="2026-03-07 01:51:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:51:09.916504507 +0000 UTC m=+60.673303186" watchObservedRunningTime="2026-03-07 01:51:09.919274113 +0000 UTC m=+60.676072771" Mar 7 01:51:10.368812 kubelet[2261]: I0307 01:51:10.368740 2261 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.368713367 podStartE2EDuration="3.368713367s" podCreationTimestamp="2026-03-07 01:51:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:51:10.368523962 +0000 UTC m=+61.125322671" watchObservedRunningTime="2026-03-07 01:51:10.368713367 +0000 UTC m=+61.125512046" Mar 7 01:51:11.274500 systemd[1]: Reloading requested from client PID 2556 ('systemctl') (unit session-5.scope)... Mar 7 01:51:11.274576 systemd[1]: Reloading... Mar 7 01:51:13.353777 zram_generator::config[2595]: No configuration found. Mar 7 01:51:15.232154 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:51:16.452408 kubelet[2261]: E0307 01:51:16.452355 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:51:16.511397 systemd[1]: Reloading finished in 5229 ms. Mar 7 01:51:17.278410 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:51:17.352506 systemd[1]: kubelet.service: Deactivated successfully. 
Mar 7 01:51:17.353153 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:51:17.353242 systemd[1]: kubelet.service: Consumed 18.851s CPU time, 142.7M memory peak, 0B memory swap peak. Mar 7 01:51:17.388651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:51:18.875725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:51:18.917612 (kubelet)[2641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:51:19.188206 kubelet[2641]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:51:19.188206 kubelet[2641]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:51:19.188206 kubelet[2641]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 7 01:51:19.188206 kubelet[2641]: I0307 01:51:19.187435 2641 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:51:19.221643 kubelet[2641]: I0307 01:51:19.219336 2641 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 01:51:19.221643 kubelet[2641]: I0307 01:51:19.219425 2641 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:51:19.230310 kubelet[2641]: I0307 01:51:19.226821 2641 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:51:19.266814 kubelet[2641]: I0307 01:51:19.265385 2641 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:51:19.282534 kubelet[2641]: I0307 01:51:19.282364 2641 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:51:19.319889 kubelet[2641]: E0307 01:51:19.318402 2641 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:51:19.319889 kubelet[2641]: I0307 01:51:19.318447 2641 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 01:51:19.329632 kubelet[2641]: I0307 01:51:19.328575 2641 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 7 01:51:19.329632 kubelet[2641]: I0307 01:51:19.329145 2641 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:51:19.329632 kubelet[2641]: I0307 01:51:19.329191 2641 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:51:19.329632 kubelet[2641]: I0307 01:51:19.329385 2641 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:51:19.330185 
kubelet[2641]: I0307 01:51:19.329399 2641 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 01:51:19.330185 kubelet[2641]: I0307 01:51:19.329534 2641 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:51:19.330185 kubelet[2641]: I0307 01:51:19.329844 2641 kubelet.go:480] "Attempting to sync node with API server" Mar 7 01:51:19.330185 kubelet[2641]: I0307 01:51:19.329862 2641 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:51:19.330185 kubelet[2641]: I0307 01:51:19.329904 2641 kubelet.go:386] "Adding apiserver pod source" Mar 7 01:51:19.330185 kubelet[2641]: I0307 01:51:19.329928 2641 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:51:19.370848 kubelet[2641]: I0307 01:51:19.365792 2641 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:51:19.420423 kubelet[2641]: I0307 01:51:19.420380 2641 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:51:19.518128 kubelet[2641]: I0307 01:51:19.511131 2641 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 01:51:19.521386 kubelet[2641]: I0307 01:51:19.521244 2641 server.go:1289] "Started kubelet" Mar 7 01:51:19.531836 kubelet[2641]: I0307 01:51:19.524722 2641 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:51:19.541780 kubelet[2641]: I0307 01:51:19.536401 2641 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:51:19.541780 kubelet[2641]: I0307 01:51:19.537774 2641 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:51:19.545224 kubelet[2641]: E0307 01:51:19.545190 2641 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:51:19.555133 kubelet[2641]: I0307 01:51:19.546842 2641 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:51:19.595147 kubelet[2641]: I0307 01:51:19.589114 2641 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:51:19.607602 kubelet[2641]: I0307 01:51:19.605424 2641 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:51:19.613635 kubelet[2641]: I0307 01:51:19.609818 2641 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 01:51:19.617112 kubelet[2641]: I0307 01:51:19.615311 2641 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 01:51:19.617112 kubelet[2641]: I0307 01:51:19.616221 2641 reconciler.go:26] "Reconciler: start to sync state" Mar 7 01:51:19.641533 kubelet[2641]: I0307 01:51:19.639667 2641 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:51:19.641533 kubelet[2641]: I0307 01:51:19.640373 2641 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:51:19.681814 kubelet[2641]: I0307 01:51:19.674423 2641 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:51:20.336462 kubelet[2641]: I0307 01:51:20.336307 2641 apiserver.go:52] "Watching apiserver" Mar 7 01:51:20.667485 kubelet[2641]: I0307 01:51:20.655146 2641 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 01:51:20.737631 kubelet[2641]: I0307 01:51:20.735107 2641 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 7 01:51:20.737631 kubelet[2641]: I0307 01:51:20.735239 2641 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 01:51:20.737631 kubelet[2641]: I0307 01:51:20.735432 2641 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:51:20.737631 kubelet[2641]: I0307 01:51:20.735449 2641 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 01:51:20.737631 kubelet[2641]: E0307 01:51:20.735807 2641 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:51:20.932256 kubelet[2641]: E0307 01:51:20.880392 2641 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:51:21.112952 kubelet[2641]: E0307 01:51:21.096340 2641 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:51:21.388494 kubelet[2641]: I0307 01:51:21.388316 2641 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:51:21.388494 kubelet[2641]: I0307 01:51:21.388390 2641 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:51:21.388494 kubelet[2641]: I0307 01:51:21.388424 2641 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:51:21.407695 kubelet[2641]: I0307 01:51:21.398330 2641 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 01:51:21.408100 kubelet[2641]: I0307 01:51:21.407959 2641 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 01:51:21.408219 kubelet[2641]: I0307 01:51:21.408204 2641 policy_none.go:49] "None policy: Start" Mar 7 01:51:21.408324 kubelet[2641]: I0307 01:51:21.408308 2641 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 01:51:21.408472 kubelet[2641]: I0307 01:51:21.408395 2641 state_mem.go:35] "Initializing new in-memory state store" Mar 7 
01:51:21.433172 kubelet[2641]: I0307 01:51:21.431651 2641 state_mem.go:75] "Updated machine memory state" Mar 7 01:51:21.481415 kubelet[2641]: E0307 01:51:21.480263 2641 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:51:21.482925 kubelet[2641]: I0307 01:51:21.482802 2641 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:51:21.483197 kubelet[2641]: I0307 01:51:21.482914 2641 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:51:21.484832 kubelet[2641]: I0307 01:51:21.483883 2641 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:51:21.517264 kubelet[2641]: I0307 01:51:21.515978 2641 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 01:51:21.523191 kubelet[2641]: E0307 01:51:21.523154 2641 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 7 01:51:21.534454 kubelet[2641]: I0307 01:51:21.532391 2641 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:51:21.543795 kubelet[2641]: I0307 01:51:21.540840 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f13adfe9a9566b07dae44c4da0fdcbd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f13adfe9a9566b07dae44c4da0fdcbd\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:51:21.543795 kubelet[2641]: I0307 01:51:21.540914 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f13adfe9a9566b07dae44c4da0fdcbd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6f13adfe9a9566b07dae44c4da0fdcbd\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:51:21.543795 kubelet[2641]: I0307 01:51:21.540948 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:51:21.543795 kubelet[2641]: I0307 01:51:21.540974 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/586c770d-8784-4921-838d-9e222ea23282-xtables-lock\") pod \"kube-proxy-xsbxx\" (UID: \"586c770d-8784-4921-838d-9e222ea23282\") " pod="kube-system/kube-proxy-xsbxx" Mar 7 01:51:21.543795 kubelet[2641]: I0307 01:51:21.541108 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-966r2\" (UniqueName: 
\"kubernetes.io/projected/586c770d-8784-4921-838d-9e222ea23282-kube-api-access-966r2\") pod \"kube-proxy-xsbxx\" (UID: \"586c770d-8784-4921-838d-9e222ea23282\") " pod="kube-system/kube-proxy-xsbxx" Mar 7 01:51:21.544307 containerd[1460]: time="2026-03-07T01:51:21.541861696Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 01:51:21.545167 kubelet[2641]: I0307 01:51:21.541139 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f13adfe9a9566b07dae44c4da0fdcbd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f13adfe9a9566b07dae44c4da0fdcbd\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:51:21.545167 kubelet[2641]: I0307 01:51:21.541175 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:51:21.545167 kubelet[2641]: I0307 01:51:21.541205 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:51:21.545167 kubelet[2641]: I0307 01:51:21.541232 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:51:21.545167 kubelet[2641]: I0307 
01:51:21.541254 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:51:21.545456 kubelet[2641]: I0307 01:51:21.541276 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 7 01:51:21.546516 kubelet[2641]: I0307 01:51:21.542464 2641 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:51:21.546516 kubelet[2641]: I0307 01:51:21.548645 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/586c770d-8784-4921-838d-9e222ea23282-kube-proxy\") pod \"kube-proxy-xsbxx\" (UID: \"586c770d-8784-4921-838d-9e222ea23282\") " pod="kube-system/kube-proxy-xsbxx" Mar 7 01:51:21.546516 kubelet[2641]: I0307 01:51:21.548782 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/586c770d-8784-4921-838d-9e222ea23282-lib-modules\") pod \"kube-proxy-xsbxx\" (UID: \"586c770d-8784-4921-838d-9e222ea23282\") " pod="kube-system/kube-proxy-xsbxx" Mar 7 01:51:21.782425 systemd[1]: Created slice kubepods-besteffort-pod586c770d_8784_4921_838d_9e222ea23282.slice - libcontainer container kubepods-besteffort-pod586c770d_8784_4921_838d_9e222ea23282.slice. 
Mar 7 01:51:21.823667 kubelet[2641]: I0307 01:51:21.815966 2641 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:51:21.900380 kubelet[2641]: E0307 01:51:21.891525 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:51:21.900380 kubelet[2641]: E0307 01:51:21.895403 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:51:21.924602 kubelet[2641]: E0307 01:51:21.922158 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:51:21.941097 kubelet[2641]: I0307 01:51:21.940332 2641 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 7 01:51:21.966137 kubelet[2641]: I0307 01:51:21.963683 2641 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 7 01:51:22.232129 kubelet[2641]: E0307 01:51:22.228292 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:51:22.249401 kubelet[2641]: E0307 01:51:22.248880 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:51:22.309224 containerd[1460]: time="2026-03-07T01:51:22.306899226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xsbxx,Uid:586c770d-8784-4921-838d-9e222ea23282,Namespace:kube-system,Attempt:0,}" Mar 7 01:51:22.313298 kubelet[2641]: E0307 01:51:22.312700 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:51:23.473666 kubelet[2641]: E0307 01:51:23.321475 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:51:23.473666 kubelet[2641]: E0307 01:51:23.325228 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:51:23.695380 containerd[1460]: time="2026-03-07T01:51:23.683917403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:51:23.695380 containerd[1460]: time="2026-03-07T01:51:23.684526347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:51:23.695380 containerd[1460]: time="2026-03-07T01:51:23.684710002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:51:23.695380 containerd[1460]: time="2026-03-07T01:51:23.685957295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:51:25.128530 systemd[1]: Started cri-containerd-7af856ebf77bd62edc6d917518e236c987b0b333171dc65d83ad857f5d543b7b.scope - libcontainer container 7af856ebf77bd62edc6d917518e236c987b0b333171dc65d83ad857f5d543b7b. 
Mar 7 01:51:26.223503 containerd[1460]: time="2026-03-07T01:51:26.219523718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xsbxx,Uid:586c770d-8784-4921-838d-9e222ea23282,Namespace:kube-system,Attempt:0,} returns sandbox id \"7af856ebf77bd62edc6d917518e236c987b0b333171dc65d83ad857f5d543b7b\""
Mar 7 01:51:26.239785 kubelet[2641]: E0307 01:51:26.235809 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:51:26.439064 containerd[1460]: time="2026-03-07T01:51:26.438715760Z" level=info msg="CreateContainer within sandbox \"7af856ebf77bd62edc6d917518e236c987b0b333171dc65d83ad857f5d543b7b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 7 01:51:26.668182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2838940378.mount: Deactivated successfully.
Mar 7 01:51:26.745213 containerd[1460]: time="2026-03-07T01:51:26.744959185Z" level=info msg="CreateContainer within sandbox \"7af856ebf77bd62edc6d917518e236c987b0b333171dc65d83ad857f5d543b7b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"98b81e6131b1486e98686d670d8bce14141060cd5a211e78c59119c7063770db\""
Mar 7 01:51:26.749203 containerd[1460]: time="2026-03-07T01:51:26.746564634Z" level=info msg="StartContainer for \"98b81e6131b1486e98686d670d8bce14141060cd5a211e78c59119c7063770db\""
Mar 7 01:51:27.298578 kubelet[2641]: E0307 01:51:27.298429 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:51:27.300763 kubelet[2641]: E0307 01:51:27.300411 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:51:27.700525 systemd[1]: Started cri-containerd-98b81e6131b1486e98686d670d8bce14141060cd5a211e78c59119c7063770db.scope - libcontainer container 98b81e6131b1486e98686d670d8bce14141060cd5a211e78c59119c7063770db.
Mar 7 01:51:28.262960 kubelet[2641]: E0307 01:51:28.252457 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:51:28.262960 kubelet[2641]: E0307 01:51:28.258151 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:51:28.727926 kubelet[2641]: E0307 01:51:28.724268 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:51:28.816929 containerd[1460]: time="2026-03-07T01:51:28.789540499Z" level=info msg="StartContainer for \"98b81e6131b1486e98686d670d8bce14141060cd5a211e78c59119c7063770db\" returns successfully"
Mar 7 01:51:29.367175 kubelet[2641]: E0307 01:51:29.356114 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:51:30.441874 kubelet[2641]: E0307 01:51:30.441545 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:51:32.425884 kubelet[2641]: I0307 01:51:32.423433 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xsbxx" podStartSLOduration=11.423406777 podStartE2EDuration="11.423406777s" podCreationTimestamp="2026-03-07 01:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:51:29.853330923 +0000 UTC m=+10.920940178" watchObservedRunningTime="2026-03-07 01:51:32.423406777 +0000 UTC m=+13.491016032"
Mar 7 01:51:32.619185 systemd[1]: Created slice kubepods-burstable-podb495780f_b0f5_44bd_9ee1_2bd3abb047f2.slice - libcontainer container kubepods-burstable-podb495780f_b0f5_44bd_9ee1_2bd3abb047f2.slice.
Mar 7 01:51:32.654834 kubelet[2641]: I0307 01:51:32.654218 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/b495780f-b0f5-44bd-9ee1-2bd3abb047f2-cni-plugin\") pod \"kube-flannel-ds-z552v\" (UID: \"b495780f-b0f5-44bd-9ee1-2bd3abb047f2\") " pod="kube-flannel/kube-flannel-ds-z552v"
Mar 7 01:51:32.654834 kubelet[2641]: I0307 01:51:32.654345 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/b495780f-b0f5-44bd-9ee1-2bd3abb047f2-cni\") pod \"kube-flannel-ds-z552v\" (UID: \"b495780f-b0f5-44bd-9ee1-2bd3abb047f2\") " pod="kube-flannel/kube-flannel-ds-z552v"
Mar 7 01:51:32.654834 kubelet[2641]: I0307 01:51:32.654379 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b495780f-b0f5-44bd-9ee1-2bd3abb047f2-run\") pod \"kube-flannel-ds-z552v\" (UID: \"b495780f-b0f5-44bd-9ee1-2bd3abb047f2\") " pod="kube-flannel/kube-flannel-ds-z552v"
Mar 7 01:51:32.654834 kubelet[2641]: I0307 01:51:32.654405 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/b495780f-b0f5-44bd-9ee1-2bd3abb047f2-flannel-cfg\") pod \"kube-flannel-ds-z552v\" (UID: \"b495780f-b0f5-44bd-9ee1-2bd3abb047f2\") " pod="kube-flannel/kube-flannel-ds-z552v"
Mar 7 01:51:32.654834 kubelet[2641]: I0307 01:51:32.654435 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b495780f-b0f5-44bd-9ee1-2bd3abb047f2-xtables-lock\") pod \"kube-flannel-ds-z552v\" (UID: \"b495780f-b0f5-44bd-9ee1-2bd3abb047f2\") " pod="kube-flannel/kube-flannel-ds-z552v"
Mar 7 01:51:32.655341 kubelet[2641]: I0307 01:51:32.654462 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vghjd\" (UniqueName: \"kubernetes.io/projected/b495780f-b0f5-44bd-9ee1-2bd3abb047f2-kube-api-access-vghjd\") pod \"kube-flannel-ds-z552v\" (UID: \"b495780f-b0f5-44bd-9ee1-2bd3abb047f2\") " pod="kube-flannel/kube-flannel-ds-z552v"
Mar 7 01:51:32.956199 kubelet[2641]: E0307 01:51:32.954979 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:51:32.986426 containerd[1460]: time="2026-03-07T01:51:32.983976685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-z552v,Uid:b495780f-b0f5-44bd-9ee1-2bd3abb047f2,Namespace:kube-flannel,Attempt:0,}"
Mar 7 01:51:33.234372 containerd[1460]: time="2026-03-07T01:51:33.233352907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:51:33.234372 containerd[1460]: time="2026-03-07T01:51:33.233483262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:51:33.234372 containerd[1460]: time="2026-03-07T01:51:33.233507417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:51:33.234372 containerd[1460]: time="2026-03-07T01:51:33.233656558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:51:33.352900 sudo[1597]: pam_unix(sudo:session): session closed for user root
Mar 7 01:51:33.401410 sshd[1594]: pam_unix(sshd:session): session closed for user core
Mar 7 01:51:33.409937 systemd[1]: Started cri-containerd-dac386e377643bd611696c45b844f5edd72d74041a67ec5f6a329444185783d8.scope - libcontainer container dac386e377643bd611696c45b844f5edd72d74041a67ec5f6a329444185783d8.
Mar 7 01:51:33.410931 systemd[1]: sshd@4-10.0.0.110:22-10.0.0.1:38774.service: Deactivated successfully.
Mar 7 01:51:33.416275 systemd[1]: session-5.scope: Deactivated successfully.
Mar 7 01:51:33.416811 systemd[1]: session-5.scope: Consumed 28.523s CPU time, 169.6M memory peak, 0B memory swap peak.
Mar 7 01:51:33.420868 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit.
Mar 7 01:51:33.425811 systemd-logind[1442]: Removed session 5.
Mar 7 01:51:34.138224 containerd[1460]: time="2026-03-07T01:51:34.136393110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-z552v,Uid:b495780f-b0f5-44bd-9ee1-2bd3abb047f2,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"dac386e377643bd611696c45b844f5edd72d74041a67ec5f6a329444185783d8\""
Mar 7 01:51:34.142469 kubelet[2641]: E0307 01:51:34.140554 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:51:34.145281 containerd[1460]: time="2026-03-07T01:51:34.145164161Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\""
Mar 7 01:51:37.587313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2119094062.mount: Deactivated successfully.
Mar 7 01:51:38.052628 containerd[1460]: time="2026-03-07T01:51:38.052455033Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:51:38.063258 containerd[1460]: time="2026-03-07T01:51:38.056218875Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008"
Mar 7 01:51:38.063258 containerd[1460]: time="2026-03-07T01:51:38.057877917Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:51:38.073387 containerd[1460]: time="2026-03-07T01:51:38.070705938Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:51:38.075367 containerd[1460]: time="2026-03-07T01:51:38.075305435Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 3.930050183s"
Mar 7 01:51:38.075622 containerd[1460]: time="2026-03-07T01:51:38.075488178Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\""
Mar 7 01:51:38.114733 containerd[1460]: time="2026-03-07T01:51:38.109646188Z" level=info msg="CreateContainer within sandbox \"dac386e377643bd611696c45b844f5edd72d74041a67ec5f6a329444185783d8\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Mar 7 01:51:38.267081 containerd[1460]: time="2026-03-07T01:51:38.266390999Z" level=info msg="CreateContainer within sandbox \"dac386e377643bd611696c45b844f5edd72d74041a67ec5f6a329444185783d8\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"0eb4349a16b1c2470396ce63de23c2eeaca99fd7a629601101c9ff956cf8beba\""
Mar 7 01:51:38.272298 containerd[1460]: time="2026-03-07T01:51:38.268093125Z" level=info msg="StartContainer for \"0eb4349a16b1c2470396ce63de23c2eeaca99fd7a629601101c9ff956cf8beba\""
Mar 7 01:51:38.441879 systemd[1]: Started cri-containerd-0eb4349a16b1c2470396ce63de23c2eeaca99fd7a629601101c9ff956cf8beba.scope - libcontainer container 0eb4349a16b1c2470396ce63de23c2eeaca99fd7a629601101c9ff956cf8beba.
Mar 7 01:51:38.851771 kubelet[2641]: E0307 01:51:38.839770 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:51:38.870768 systemd[1]: cri-containerd-0eb4349a16b1c2470396ce63de23c2eeaca99fd7a629601101c9ff956cf8beba.scope: Deactivated successfully.
Mar 7 01:51:38.889422 containerd[1460]: time="2026-03-07T01:51:38.888781104Z" level=info msg="StartContainer for \"0eb4349a16b1c2470396ce63de23c2eeaca99fd7a629601101c9ff956cf8beba\" returns successfully"
Mar 7 01:51:39.091245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0eb4349a16b1c2470396ce63de23c2eeaca99fd7a629601101c9ff956cf8beba-rootfs.mount: Deactivated successfully.
Mar 7 01:51:39.273195 containerd[1460]: time="2026-03-07T01:51:39.267526383Z" level=info msg="shim disconnected" id=0eb4349a16b1c2470396ce63de23c2eeaca99fd7a629601101c9ff956cf8beba namespace=k8s.io
Mar 7 01:51:39.273195 containerd[1460]: time="2026-03-07T01:51:39.268621419Z" level=warning msg="cleaning up after shim disconnected" id=0eb4349a16b1c2470396ce63de23c2eeaca99fd7a629601101c9ff956cf8beba namespace=k8s.io
Mar 7 01:51:39.273195 containerd[1460]: time="2026-03-07T01:51:39.268650503Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:51:39.720097 kubelet[2641]: E0307 01:51:39.718896 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:51:39.723586 containerd[1460]: time="2026-03-07T01:51:39.721372139Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\""
Mar 7 01:51:51.963890 containerd[1460]: time="2026-03-07T01:51:51.962610433Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:51:51.968600 containerd[1460]: time="2026-03-07T01:51:51.968156665Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574"
Mar 7 01:51:51.973868 containerd[1460]: time="2026-03-07T01:51:51.973706492Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:51:52.033900 containerd[1460]: time="2026-03-07T01:51:52.033772050Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:51:52.043171 containerd[1460]: time="2026-03-07T01:51:52.042387568Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 12.320972797s"
Mar 7 01:51:52.043171 containerd[1460]: time="2026-03-07T01:51:52.042470633Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\""
Mar 7 01:51:52.082737 containerd[1460]: time="2026-03-07T01:51:52.082691201Z" level=info msg="CreateContainer within sandbox \"dac386e377643bd611696c45b844f5edd72d74041a67ec5f6a329444185783d8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 7 01:51:52.232506 containerd[1460]: time="2026-03-07T01:51:52.229745178Z" level=info msg="CreateContainer within sandbox \"dac386e377643bd611696c45b844f5edd72d74041a67ec5f6a329444185783d8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8db2f6cca182fb293ed1892314d278ab434b87b468486328f14698490ecea82e\""
Mar 7 01:51:52.232506 containerd[1460]: time="2026-03-07T01:51:52.231258197Z" level=info msg="StartContainer for \"8db2f6cca182fb293ed1892314d278ab434b87b468486328f14698490ecea82e\""
Mar 7 01:51:52.438741 systemd[1]: Started cri-containerd-8db2f6cca182fb293ed1892314d278ab434b87b468486328f14698490ecea82e.scope - libcontainer container 8db2f6cca182fb293ed1892314d278ab434b87b468486328f14698490ecea82e.
Mar 7 01:51:52.667083 systemd[1]: cri-containerd-8db2f6cca182fb293ed1892314d278ab434b87b468486328f14698490ecea82e.scope: Deactivated successfully.
Mar 7 01:51:52.687071 containerd[1460]: time="2026-03-07T01:51:52.672353795Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb495780f_b0f5_44bd_9ee1_2bd3abb047f2.slice/cri-containerd-8db2f6cca182fb293ed1892314d278ab434b87b468486328f14698490ecea82e.scope/memory.events\": no such file or directory"
Mar 7 01:51:52.706226 kubelet[2641]: I0307 01:51:52.703831 2641 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 7 01:51:52.753401 containerd[1460]: time="2026-03-07T01:51:52.753288093Z" level=info msg="StartContainer for \"8db2f6cca182fb293ed1892314d278ab434b87b468486328f14698490ecea82e\" returns successfully"
Mar 7 01:51:52.867267 systemd[1]: Created slice kubepods-burstable-pod9d56443a_7f68_4fb2_b37b_73b9d6bcd8bd.slice - libcontainer container kubepods-burstable-pod9d56443a_7f68_4fb2_b37b_73b9d6bcd8bd.slice.
Mar 7 01:51:52.871220 kubelet[2641]: E0307 01:51:52.870883 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:51:52.873379 kubelet[2641]: I0307 01:51:52.873337 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d56443a-7f68-4fb2-b37b-73b9d6bcd8bd-config-volume\") pod \"coredns-674b8bbfcf-p2npv\" (UID: \"9d56443a-7f68-4fb2-b37b-73b9d6bcd8bd\") " pod="kube-system/coredns-674b8bbfcf-p2npv"
Mar 7 01:51:52.875711 kubelet[2641]: I0307 01:51:52.873408 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gb64\" (UniqueName: \"kubernetes.io/projected/9d56443a-7f68-4fb2-b37b-73b9d6bcd8bd-kube-api-access-5gb64\") pod \"coredns-674b8bbfcf-p2npv\" (UID: \"9d56443a-7f68-4fb2-b37b-73b9d6bcd8bd\") " pod="kube-system/coredns-674b8bbfcf-p2npv"
Mar 7 01:51:52.911810 systemd[1]: Created slice kubepods-burstable-pod57aa6722_9d0c_497d_aa4c_01ee72e33936.slice - libcontainer container kubepods-burstable-pod57aa6722_9d0c_497d_aa4c_01ee72e33936.slice.
Mar 7 01:51:52.973726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8db2f6cca182fb293ed1892314d278ab434b87b468486328f14698490ecea82e-rootfs.mount: Deactivated successfully.
Mar 7 01:51:52.985771 kubelet[2641]: I0307 01:51:52.984269 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57aa6722-9d0c-497d-aa4c-01ee72e33936-config-volume\") pod \"coredns-674b8bbfcf-j8hww\" (UID: \"57aa6722-9d0c-497d-aa4c-01ee72e33936\") " pod="kube-system/coredns-674b8bbfcf-j8hww"
Mar 7 01:51:52.985771 kubelet[2641]: I0307 01:51:52.984331 2641 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvp67\" (UniqueName: \"kubernetes.io/projected/57aa6722-9d0c-497d-aa4c-01ee72e33936-kube-api-access-hvp67\") pod \"coredns-674b8bbfcf-j8hww\" (UID: \"57aa6722-9d0c-497d-aa4c-01ee72e33936\") " pod="kube-system/coredns-674b8bbfcf-j8hww"
Mar 7 01:51:53.014886 containerd[1460]: time="2026-03-07T01:51:53.013093241Z" level=info msg="shim disconnected" id=8db2f6cca182fb293ed1892314d278ab434b87b468486328f14698490ecea82e namespace=k8s.io
Mar 7 01:51:53.014886 containerd[1460]: time="2026-03-07T01:51:53.013174724Z" level=warning msg="cleaning up after shim disconnected" id=8db2f6cca182fb293ed1892314d278ab434b87b468486328f14698490ecea82e namespace=k8s.io
Mar 7 01:51:53.014886 containerd[1460]: time="2026-03-07T01:51:53.013195082Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:51:53.189898 kubelet[2641]: E0307 01:51:53.187371 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:51:53.190615 containerd[1460]: time="2026-03-07T01:51:53.189218966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p2npv,Uid:9d56443a-7f68-4fb2-b37b-73b9d6bcd8bd,Namespace:kube-system,Attempt:0,}"
Mar 7 01:51:53.223360 kubelet[2641]: E0307 01:51:53.223286 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:51:53.225330 containerd[1460]: time="2026-03-07T01:51:53.223906423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j8hww,Uid:57aa6722-9d0c-497d-aa4c-01ee72e33936,Namespace:kube-system,Attempt:0,}"
Mar 7 01:51:53.457405 containerd[1460]: time="2026-03-07T01:51:53.454572257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j8hww,Uid:57aa6722-9d0c-497d-aa4c-01ee72e33936,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"537c6a295dfe50ac203e032eaf4c3fc2c866f4b7efe8e1d65baaa855ce819bba\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Mar 7 01:51:53.464618 containerd[1460]: time="2026-03-07T01:51:53.464454093Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p2npv,Uid:9d56443a-7f68-4fb2-b37b-73b9d6bcd8bd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"941362affd8659c85b1d8c0c2f2f6f82512069a94497ff7f81b0f030cc091d84\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Mar 7 01:51:53.467303 kubelet[2641]: E0307 01:51:53.465421 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"941362affd8659c85b1d8c0c2f2f6f82512069a94497ff7f81b0f030cc091d84\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Mar 7 01:51:53.467303 kubelet[2641]: E0307 01:51:53.465480 2641 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"537c6a295dfe50ac203e032eaf4c3fc2c866f4b7efe8e1d65baaa855ce819bba\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Mar 7 01:51:53.467303 kubelet[2641]: E0307 01:51:53.465929 2641 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"941362affd8659c85b1d8c0c2f2f6f82512069a94497ff7f81b0f030cc091d84\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-p2npv"
Mar 7 01:51:53.467303 kubelet[2641]: E0307 01:51:53.466211 2641 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"941362affd8659c85b1d8c0c2f2f6f82512069a94497ff7f81b0f030cc091d84\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-p2npv"
Mar 7 01:51:53.467596 kubelet[2641]: E0307 01:51:53.466310 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p2npv_kube-system(9d56443a-7f68-4fb2-b37b-73b9d6bcd8bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p2npv_kube-system(9d56443a-7f68-4fb2-b37b-73b9d6bcd8bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"941362affd8659c85b1d8c0c2f2f6f82512069a94497ff7f81b0f030cc091d84\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-p2npv" podUID="9d56443a-7f68-4fb2-b37b-73b9d6bcd8bd"
Mar 7 01:51:53.467596 kubelet[2641]: E0307 01:51:53.466102 2641 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"537c6a295dfe50ac203e032eaf4c3fc2c866f4b7efe8e1d65baaa855ce819bba\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-j8hww"
Mar 7 01:51:53.467596 kubelet[2641]: E0307 01:51:53.466706 2641 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"537c6a295dfe50ac203e032eaf4c3fc2c866f4b7efe8e1d65baaa855ce819bba\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-j8hww"
Mar 7 01:51:53.467847 kubelet[2641]: E0307 01:51:53.466761 2641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-j8hww_kube-system(57aa6722-9d0c-497d-aa4c-01ee72e33936)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-j8hww_kube-system(57aa6722-9d0c-497d-aa4c-01ee72e33936)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"537c6a295dfe50ac203e032eaf4c3fc2c866f4b7efe8e1d65baaa855ce819bba\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-j8hww" podUID="57aa6722-9d0c-497d-aa4c-01ee72e33936"
Mar 7 01:51:53.880780 kubelet[2641]: E0307 01:51:53.880507 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:51:53.914613 containerd[1460]: time="2026-03-07T01:51:53.914504061Z" level=info msg="CreateContainer within sandbox \"dac386e377643bd611696c45b844f5edd72d74041a67ec5f6a329444185783d8\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Mar 7 01:51:54.045641 containerd[1460]: time="2026-03-07T01:51:54.044763175Z" level=info msg="CreateContainer within sandbox \"dac386e377643bd611696c45b844f5edd72d74041a67ec5f6a329444185783d8\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"4dc0bbf67c9ea7858a0229f575daa17e141f8d7f0be83ebd4925ca2ab7f75af3\""
Mar 7 01:51:54.048379 containerd[1460]: time="2026-03-07T01:51:54.046307012Z" level=info msg="StartContainer for \"4dc0bbf67c9ea7858a0229f575daa17e141f8d7f0be83ebd4925ca2ab7f75af3\""
Mar 7 01:51:54.176696 systemd[1]: run-netns-cni\x2dfe028087\x2d95c9\x2d4a8d\x2deb2d\x2d55ff3c926c9e.mount: Deactivated successfully.
Mar 7 01:51:54.177124 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-537c6a295dfe50ac203e032eaf4c3fc2c866f4b7efe8e1d65baaa855ce819bba-shm.mount: Deactivated successfully.
Mar 7 01:51:54.177244 systemd[1]: run-netns-cni\x2dce9c0bc1\x2d28ae\x2d0840\x2d3e79\x2db3391c391369.mount: Deactivated successfully.
Mar 7 01:51:54.177441 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-941362affd8659c85b1d8c0c2f2f6f82512069a94497ff7f81b0f030cc091d84-shm.mount: Deactivated successfully.
Mar 7 01:51:54.331793 systemd[1]: Started cri-containerd-4dc0bbf67c9ea7858a0229f575daa17e141f8d7f0be83ebd4925ca2ab7f75af3.scope - libcontainer container 4dc0bbf67c9ea7858a0229f575daa17e141f8d7f0be83ebd4925ca2ab7f75af3.
Mar 7 01:51:54.481277 containerd[1460]: time="2026-03-07T01:51:54.480901516Z" level=info msg="StartContainer for \"4dc0bbf67c9ea7858a0229f575daa17e141f8d7f0be83ebd4925ca2ab7f75af3\" returns successfully"
Mar 7 01:51:54.944417 kubelet[2641]: E0307 01:51:54.939455 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:51:55.006571 kubelet[2641]: I0307 01:51:55.005673 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-z552v" podStartSLOduration=5.095495909 podStartE2EDuration="23.005540378s" podCreationTimestamp="2026-03-07 01:51:32 +0000 UTC" firstStartedPulling="2026-03-07 01:51:34.144377353 +0000 UTC m=+15.211986608" lastFinishedPulling="2026-03-07 01:51:52.054421822 +0000 UTC m=+33.122031077" observedRunningTime="2026-03-07 01:51:55.004781803 +0000 UTC m=+36.072391059" watchObservedRunningTime="2026-03-07 01:51:55.005540378 +0000 UTC m=+36.073149643"
Mar 7 01:51:55.989832 kubelet[2641]: E0307 01:51:55.985906 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:51:56.170535 systemd-networkd[1384]: flannel.1: Link UP
Mar 7 01:51:56.170549 systemd-networkd[1384]: flannel.1: Gained carrier
Mar 7 01:51:57.212955 systemd-networkd[1384]: flannel.1: Gained IPv6LL
Mar 7 01:52:05.773252 kubelet[2641]: E0307 01:52:05.772248 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:52:05.781830 containerd[1460]: time="2026-03-07T01:52:05.780933205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p2npv,Uid:9d56443a-7f68-4fb2-b37b-73b9d6bcd8bd,Namespace:kube-system,Attempt:0,}"
Mar 7 01:52:06.025940 systemd-networkd[1384]: cni0: Link UP
Mar 7 01:52:06.025955 systemd-networkd[1384]: cni0: Gained carrier
Mar 7 01:52:06.087548 systemd-networkd[1384]: cni0: Lost carrier
Mar 7 01:52:06.156593 systemd-networkd[1384]: vethf4922476: Link UP
Mar 7 01:52:06.188351 kernel: cni0: port 1(vethf4922476) entered blocking state
Mar 7 01:52:06.188463 kernel: cni0: port 1(vethf4922476) entered disabled state
Mar 7 01:52:06.188495 kernel: vethf4922476: entered allmulticast mode
Mar 7 01:52:06.201235 kernel: vethf4922476: entered promiscuous mode
Mar 7 01:52:06.226789 kernel: cni0: port 1(vethf4922476) entered blocking state
Mar 7 01:52:06.228432 kernel: cni0: port 1(vethf4922476) entered forwarding state
Mar 7 01:52:06.239903 kernel: cni0: port 1(vethf4922476) entered disabled state
Mar 7 01:52:06.270907 kernel: cni0: port 1(vethf4922476) entered blocking state
Mar 7 01:52:06.271306 kernel: cni0: port 1(vethf4922476) entered forwarding state
Mar 7 01:52:06.273558 systemd-networkd[1384]: vethf4922476: Gained carrier
Mar 7 01:52:06.284308 systemd-networkd[1384]: cni0: Gained carrier
Mar 7 01:52:06.306310 containerd[1460]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"}
Mar 7 01:52:06.306310 containerd[1460]: delegateAdd: netconf sent to delegate plugin:
Mar 7 01:52:06.616315 containerd[1460]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-03-07T01:52:06.615316391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:52:06.616315 containerd[1460]: time="2026-03-07T01:52:06.615470270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:52:06.616315 containerd[1460]: time="2026-03-07T01:52:06.615576118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:52:06.616315 containerd[1460]: time="2026-03-07T01:52:06.615729185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:52:06.828298 systemd[1]: run-containerd-runc-k8s.io-918197ab86b2e0ce8ef63c137cbdd122c3eaba5c24a15ef0dddeddd96a2a46c1-runc.hBZOZX.mount: Deactivated successfully.
Mar 7 01:52:06.877274 systemd[1]: Started cri-containerd-918197ab86b2e0ce8ef63c137cbdd122c3eaba5c24a15ef0dddeddd96a2a46c1.scope - libcontainer container 918197ab86b2e0ce8ef63c137cbdd122c3eaba5c24a15ef0dddeddd96a2a46c1.
Mar 7 01:52:06.983390 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 7 01:52:07.120497 containerd[1460]: time="2026-03-07T01:52:07.119967630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p2npv,Uid:9d56443a-7f68-4fb2-b37b-73b9d6bcd8bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"918197ab86b2e0ce8ef63c137cbdd122c3eaba5c24a15ef0dddeddd96a2a46c1\""
Mar 7 01:52:07.126489 kubelet[2641]: E0307 01:52:07.124391 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:52:07.199359 containerd[1460]: time="2026-03-07T01:52:07.194729612Z" level=info msg="CreateContainer within sandbox \"918197ab86b2e0ce8ef63c137cbdd122c3eaba5c24a15ef0dddeddd96a2a46c1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 7 01:52:07.320226 systemd-networkd[1384]: cni0: Gained IPv6LL
Mar 7 01:52:07.344444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2674596269.mount: Deactivated successfully.
Mar 7 01:52:07.381241 containerd[1460]: time="2026-03-07T01:52:07.380773768Z" level=info msg="CreateContainer within sandbox \"918197ab86b2e0ce8ef63c137cbdd122c3eaba5c24a15ef0dddeddd96a2a46c1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9cb5923edda9e696f0ab25840b0f93c63b77fdb3b955649d67f4b2bab9025f06\""
Mar 7 01:52:07.388362 containerd[1460]: time="2026-03-07T01:52:07.386772525Z" level=info msg="StartContainer for \"9cb5923edda9e696f0ab25840b0f93c63b77fdb3b955649d67f4b2bab9025f06\""
Mar 7 01:52:07.516939 systemd-networkd[1384]: vethf4922476: Gained IPv6LL
Mar 7 01:52:07.551940 systemd[1]: Started cri-containerd-9cb5923edda9e696f0ab25840b0f93c63b77fdb3b955649d67f4b2bab9025f06.scope - libcontainer container 9cb5923edda9e696f0ab25840b0f93c63b77fdb3b955649d67f4b2bab9025f06.
Mar 7 01:52:07.830720 containerd[1460]: time="2026-03-07T01:52:07.828717530Z" level=info msg="StartContainer for \"9cb5923edda9e696f0ab25840b0f93c63b77fdb3b955649d67f4b2bab9025f06\" returns successfully"
Mar 7 01:52:08.236201 kubelet[2641]: E0307 01:52:08.234550 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:52:08.480440 kubelet[2641]: I0307 01:52:08.479516 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-p2npv" podStartSLOduration=47.479489927 podStartE2EDuration="47.479489927s" podCreationTimestamp="2026-03-07 01:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:52:08.296821839 +0000 UTC m=+49.364431114" watchObservedRunningTime="2026-03-07 01:52:08.479489927 +0000 UTC m=+49.547099212"
Mar 7 01:52:08.757117 kubelet[2641]: E0307 01:52:08.753357 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:52:08.771928 containerd[1460]: time="2026-03-07T01:52:08.769675670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j8hww,Uid:57aa6722-9d0c-497d-aa4c-01ee72e33936,Namespace:kube-system,Attempt:0,}"
Mar 7 01:52:09.320512 kubelet[2641]: E0307 01:52:09.319609 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:52:09.327731 systemd-networkd[1384]: veth0e1ec881: Link UP
Mar 7 01:52:09.379477 kernel: cni0: port 2(veth0e1ec881) entered blocking state
Mar 7 01:52:09.379610 kernel: cni0: port 2(veth0e1ec881) entered disabled state
Mar 7 01:52:09.379643 kernel: veth0e1ec881: entered allmulticast mode
Mar 7 01:52:09.407686 kernel: veth0e1ec881: entered promiscuous mode
Mar 7 01:52:09.519384 kernel: cni0: port 2(veth0e1ec881) entered blocking state
Mar 7 01:52:09.519490 kernel: cni0: port 2(veth0e1ec881) entered forwarding state
Mar 7 01:52:09.514264 systemd-networkd[1384]: veth0e1ec881: Gained carrier
Mar 7 01:52:09.548429 containerd[1460]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"}
Mar 7 01:52:09.548429 containerd[1460]: delegateAdd: netconf sent to delegate plugin:
Mar 7 01:52:09.824492 containerd[1460]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-03-07T01:52:09.822342319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:52:09.824492 containerd[1460]: time="2026-03-07T01:52:09.822454118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:52:09.824492 containerd[1460]: time="2026-03-07T01:52:09.822490386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:52:09.824492 containerd[1460]: time="2026-03-07T01:52:09.822743992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:52:10.048656 systemd[1]: Started cri-containerd-e876ff857ad73f1a72edad97509d30c54f737677dc532be9dcdca67374a99046.scope - libcontainer container e876ff857ad73f1a72edad97509d30c54f737677dc532be9dcdca67374a99046.
Mar 7 01:52:10.240413 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 7 01:52:10.338792 kubelet[2641]: E0307 01:52:10.332738 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:52:10.699539 containerd[1460]: time="2026-03-07T01:52:10.692970854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j8hww,Uid:57aa6722-9d0c-497d-aa4c-01ee72e33936,Namespace:kube-system,Attempt:0,} returns sandbox id \"e876ff857ad73f1a72edad97509d30c54f737677dc532be9dcdca67374a99046\""
Mar 7 01:52:10.723089 kubelet[2641]: E0307 01:52:10.717273 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:52:10.847718 containerd[1460]: time="2026-03-07T01:52:10.844336740Z" level=info msg="CreateContainer within sandbox \"e876ff857ad73f1a72edad97509d30c54f737677dc532be9dcdca67374a99046\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 7 01:52:11.041388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2432998864.mount: Deactivated successfully.
Mar 7 01:52:11.057803 containerd[1460]: time="2026-03-07T01:52:11.056877866Z" level=info msg="CreateContainer within sandbox \"e876ff857ad73f1a72edad97509d30c54f737677dc532be9dcdca67374a99046\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d33a449df191ee7f8fba45387ca5e1d2f2b1e4034fd75c5bd9b976c40e66f8ce\""
Mar 7 01:52:11.066402 containerd[1460]: time="2026-03-07T01:52:11.064799511Z" level=info msg="StartContainer for \"d33a449df191ee7f8fba45387ca5e1d2f2b1e4034fd75c5bd9b976c40e66f8ce\""
Mar 7 01:52:11.427470 systemd[1]: Started cri-containerd-d33a449df191ee7f8fba45387ca5e1d2f2b1e4034fd75c5bd9b976c40e66f8ce.scope - libcontainer container d33a449df191ee7f8fba45387ca5e1d2f2b1e4034fd75c5bd9b976c40e66f8ce.
Mar 7 01:52:11.551717 systemd-networkd[1384]: veth0e1ec881: Gained IPv6LL
Mar 7 01:52:12.105252 containerd[1460]: time="2026-03-07T01:52:12.103549050Z" level=info msg="StartContainer for \"d33a449df191ee7f8fba45387ca5e1d2f2b1e4034fd75c5bd9b976c40e66f8ce\" returns successfully"
Mar 7 01:52:12.447792 kubelet[2641]: E0307 01:52:12.439521 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:52:13.510327 kubelet[2641]: E0307 01:52:13.509553 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:52:13.687928 kubelet[2641]: I0307 01:52:13.680629 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-j8hww" podStartSLOduration=52.680609857 podStartE2EDuration="52.680609857s" podCreationTimestamp="2026-03-07 01:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:52:12.643715552 +0000 UTC m=+53.711324807" watchObservedRunningTime="2026-03-07 01:52:13.680609857 +0000 UTC m=+54.748219122"
Mar 7 01:52:14.508292 kubelet[2641]: E0307 01:52:14.508187 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:52:15.526634 kubelet[2641]: E0307 01:52:15.525553 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:52:32.751760 kubelet[2641]: E0307 01:52:32.750811 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:52:44.754305 kubelet[2641]: E0307 01:52:44.752381 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:52:56.740586 kubelet[2641]: E0307 01:52:56.739344 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:53:06.758550 kubelet[2641]: E0307 01:53:06.758243 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:53:11.748606 kubelet[2641]: E0307 01:53:11.746039 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:53:19.750881 kubelet[2641]: E0307 01:53:19.749197 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:53:25.775560 systemd[1]: Started sshd@5-10.0.0.110:22-10.0.0.1:59508.service - OpenSSH per-connection server daemon (10.0.0.1:59508).
Mar 7 01:53:25.942339 sshd[3892]: Accepted publickey for core from 10.0.0.1 port 59508 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:53:25.950734 sshd[3892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:25.966688 systemd-logind[1442]: New session 6 of user core.
Mar 7 01:53:25.978371 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 7 01:53:26.554190 sshd[3892]: pam_unix(sshd:session): session closed for user core
Mar 7 01:53:26.561689 systemd[1]: sshd@5-10.0.0.110:22-10.0.0.1:59508.service: Deactivated successfully.
Mar 7 01:53:26.582446 systemd[1]: session-6.scope: Deactivated successfully.
Mar 7 01:53:26.591208 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit.
Mar 7 01:53:26.628566 systemd-logind[1442]: Removed session 6.
Mar 7 01:53:26.747471 kubelet[2641]: E0307 01:53:26.747419 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:53:31.663912 systemd[1]: Started sshd@6-10.0.0.110:22-10.0.0.1:44404.service - OpenSSH per-connection server daemon (10.0.0.1:44404).
Mar 7 01:53:31.786308 sshd[3934]: Accepted publickey for core from 10.0.0.1 port 44404 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:53:31.809073 sshd[3934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:31.863568 systemd-logind[1442]: New session 7 of user core.
Mar 7 01:53:31.878109 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 7 01:53:32.445960 sshd[3934]: pam_unix(sshd:session): session closed for user core
Mar 7 01:53:32.457313 systemd[1]: sshd@6-10.0.0.110:22-10.0.0.1:44404.service: Deactivated successfully.
Mar 7 01:53:32.468979 systemd[1]: session-7.scope: Deactivated successfully.
Mar 7 01:53:32.476516 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit.
Mar 7 01:53:32.485523 systemd-logind[1442]: Removed session 7.
Mar 7 01:53:37.496823 systemd[1]: Started sshd@7-10.0.0.110:22-10.0.0.1:44408.service - OpenSSH per-connection server daemon (10.0.0.1:44408).
Mar 7 01:53:37.690346 sshd[3971]: Accepted publickey for core from 10.0.0.1 port 44408 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:53:37.693533 sshd[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:37.724134 systemd-logind[1442]: New session 8 of user core.
Mar 7 01:53:37.750796 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 7 01:53:38.134862 sshd[3971]: pam_unix(sshd:session): session closed for user core
Mar 7 01:53:38.146646 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit.
Mar 7 01:53:38.153363 systemd[1]: sshd@7-10.0.0.110:22-10.0.0.1:44408.service: Deactivated successfully.
Mar 7 01:53:38.159612 systemd[1]: session-8.scope: Deactivated successfully.
Mar 7 01:53:38.162732 systemd-logind[1442]: Removed session 8.
Mar 7 01:53:42.743642 kubelet[2641]: E0307 01:53:42.743589 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:53:43.245478 systemd[1]: Started sshd@8-10.0.0.110:22-10.0.0.1:34590.service - OpenSSH per-connection server daemon (10.0.0.1:34590).
Mar 7 01:53:43.469456 sshd[4007]: Accepted publickey for core from 10.0.0.1 port 34590 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:53:43.502498 sshd[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:43.576954 systemd-logind[1442]: New session 9 of user core.
Mar 7 01:53:43.610726 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 7 01:53:44.225131 sshd[4007]: pam_unix(sshd:session): session closed for user core
Mar 7 01:53:44.234368 systemd[1]: sshd@8-10.0.0.110:22-10.0.0.1:34590.service: Deactivated successfully.
Mar 7 01:53:44.240388 systemd[1]: session-9.scope: Deactivated successfully.
Mar 7 01:53:44.247501 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit.
Mar 7 01:53:44.252514 systemd-logind[1442]: Removed session 9.
Mar 7 01:53:49.291493 systemd[1]: Started sshd@9-10.0.0.110:22-10.0.0.1:34602.service - OpenSSH per-connection server daemon (10.0.0.1:34602).
Mar 7 01:53:49.493627 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 34602 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:53:49.533899 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:49.588561 systemd-logind[1442]: New session 10 of user core.
Mar 7 01:53:49.633141 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 7 01:53:50.365751 sshd[4061]: pam_unix(sshd:session): session closed for user core
Mar 7 01:53:50.389920 systemd[1]: sshd@9-10.0.0.110:22-10.0.0.1:34602.service: Deactivated successfully.
Mar 7 01:53:50.414245 systemd[1]: session-10.scope: Deactivated successfully.
Mar 7 01:53:50.423303 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit.
Mar 7 01:53:50.432386 systemd-logind[1442]: Removed session 10.
Mar 7 01:53:50.737927 kubelet[2641]: E0307 01:53:50.737597 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:53:55.558861 systemd[1]: Started sshd@10-10.0.0.110:22-10.0.0.1:43842.service - OpenSSH per-connection server daemon (10.0.0.1:43842).
Mar 7 01:53:55.937482 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 43842 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:53:55.948556 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:56.028931 systemd-logind[1442]: New session 11 of user core.
Mar 7 01:53:56.058274 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 01:53:57.175906 sshd[4096]: pam_unix(sshd:session): session closed for user core
Mar 7 01:53:57.235282 systemd[1]: sshd@10-10.0.0.110:22-10.0.0.1:43842.service: Deactivated successfully.
Mar 7 01:53:57.280272 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 01:53:57.282189 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit.
Mar 7 01:53:57.332785 systemd[1]: Started sshd@11-10.0.0.110:22-10.0.0.1:43856.service - OpenSSH per-connection server daemon (10.0.0.1:43856).
Mar 7 01:53:57.348282 systemd-logind[1442]: Removed session 11.
Mar 7 01:53:57.550765 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 43856 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:53:57.564744 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:57.628774 systemd-logind[1442]: New session 12 of user core.
Mar 7 01:53:57.711984 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 01:53:59.341680 sshd[4111]: pam_unix(sshd:session): session closed for user core
Mar 7 01:53:59.401260 systemd[1]: sshd@11-10.0.0.110:22-10.0.0.1:43856.service: Deactivated successfully.
Mar 7 01:53:59.406789 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 01:53:59.410666 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit.
Mar 7 01:53:59.429323 systemd[1]: Started sshd@12-10.0.0.110:22-10.0.0.1:43860.service - OpenSSH per-connection server daemon (10.0.0.1:43860).
Mar 7 01:53:59.441755 systemd-logind[1442]: Removed session 12.
Mar 7 01:53:59.718179 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 43860 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:53:59.721533 sshd[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:59.782225 systemd-logind[1442]: New session 13 of user core.
Mar 7 01:53:59.815859 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 01:54:00.512965 sshd[4144]: pam_unix(sshd:session): session closed for user core
Mar 7 01:54:00.542524 systemd[1]: sshd@12-10.0.0.110:22-10.0.0.1:43860.service: Deactivated successfully.
Mar 7 01:54:00.551333 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 01:54:00.566691 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit.
Mar 7 01:54:00.575092 systemd-logind[1442]: Removed session 13.
Mar 7 01:54:05.609237 systemd[1]: Started sshd@13-10.0.0.110:22-10.0.0.1:50194.service - OpenSSH per-connection server daemon (10.0.0.1:50194).
Mar 7 01:54:05.851808 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 50194 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:54:05.866964 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:54:05.929318 systemd-logind[1442]: New session 14 of user core.
Mar 7 01:54:05.954135 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 01:54:06.692746 sshd[4180]: pam_unix(sshd:session): session closed for user core
Mar 7 01:54:06.730272 systemd[1]: sshd@13-10.0.0.110:22-10.0.0.1:50194.service: Deactivated successfully.
Mar 7 01:54:06.815717 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 01:54:06.821332 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit.
Mar 7 01:54:06.861612 systemd-logind[1442]: Removed session 14.
Mar 7 01:54:11.803332 systemd[1]: Started sshd@14-10.0.0.110:22-10.0.0.1:60760.service - OpenSSH per-connection server daemon (10.0.0.1:60760).
Mar 7 01:54:12.303255 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 60760 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:54:12.336834 sshd[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:54:12.372429 systemd-logind[1442]: New session 15 of user core.
Mar 7 01:54:12.379720 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 01:54:13.150654 sshd[4214]: pam_unix(sshd:session): session closed for user core
Mar 7 01:54:13.163851 systemd[1]: sshd@14-10.0.0.110:22-10.0.0.1:60760.service: Deactivated successfully.
Mar 7 01:54:13.176981 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 01:54:13.200438 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit.
Mar 7 01:54:13.217741 systemd-logind[1442]: Removed session 15.
Mar 7 01:54:15.738759 kubelet[2641]: E0307 01:54:15.738520 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:54:18.451371 systemd[1]: Started sshd@15-10.0.0.110:22-10.0.0.1:60764.service - OpenSSH per-connection server daemon (10.0.0.1:60764).
Mar 7 01:54:18.762641 sshd[4248]: Accepted publickey for core from 10.0.0.1 port 60764 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:54:18.778388 sshd[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:54:18.828317 systemd-logind[1442]: New session 16 of user core.
Mar 7 01:54:18.867473 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 01:54:19.474400 sshd[4248]: pam_unix(sshd:session): session closed for user core
Mar 7 01:54:19.483964 systemd[1]: sshd@15-10.0.0.110:22-10.0.0.1:60764.service: Deactivated successfully.
Mar 7 01:54:19.518388 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 01:54:19.537896 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit.
Mar 7 01:54:19.546908 systemd-logind[1442]: Removed session 16.
Mar 7 01:54:24.582554 systemd[1]: Started sshd@16-10.0.0.110:22-10.0.0.1:46598.service - OpenSSH per-connection server daemon (10.0.0.1:46598).
Mar 7 01:54:24.851787 sshd[4285]: Accepted publickey for core from 10.0.0.1 port 46598 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:54:24.852145 sshd[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:54:24.995647 systemd-logind[1442]: New session 17 of user core.
Mar 7 01:54:25.013854 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 01:54:25.482243 sshd[4285]: pam_unix(sshd:session): session closed for user core
Mar 7 01:54:25.514593 systemd[1]: sshd@16-10.0.0.110:22-10.0.0.1:46598.service: Deactivated successfully.
Mar 7 01:54:25.538526 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 01:54:25.559249 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit.
Mar 7 01:54:25.575904 systemd-logind[1442]: Removed session 17.
Mar 7 01:54:26.743327 kubelet[2641]: E0307 01:54:26.740617 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:54:30.550711 systemd[1]: Started sshd@17-10.0.0.110:22-10.0.0.1:33912.service - OpenSSH per-connection server daemon (10.0.0.1:33912).
Mar 7 01:54:30.739277 sshd[4338]: Accepted publickey for core from 10.0.0.1 port 33912 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:54:30.750864 sshd[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:54:30.791912 systemd-logind[1442]: New session 18 of user core.
Mar 7 01:54:30.825363 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 01:54:31.405351 sshd[4338]: pam_unix(sshd:session): session closed for user core
Mar 7 01:54:31.419227 systemd[1]: sshd@17-10.0.0.110:22-10.0.0.1:33912.service: Deactivated successfully.
Mar 7 01:54:31.433920 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 01:54:31.446956 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit.
Mar 7 01:54:31.449626 systemd-logind[1442]: Removed session 18.
Mar 7 01:54:33.749402 kubelet[2641]: E0307 01:54:33.749259 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:54:36.452944 systemd[1]: Started sshd@18-10.0.0.110:22-10.0.0.1:33944.service - OpenSSH per-connection server daemon (10.0.0.1:33944).
Mar 7 01:54:36.546120 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 33944 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:54:36.556242 sshd[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:54:36.589237 systemd-logind[1442]: New session 19 of user core.
Mar 7 01:54:36.625122 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 01:54:36.899328 sshd[4377]: pam_unix(sshd:session): session closed for user core
Mar 7 01:54:36.917510 systemd[1]: sshd@18-10.0.0.110:22-10.0.0.1:33944.service: Deactivated successfully.
Mar 7 01:54:36.922243 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 01:54:36.925620 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit.
Mar 7 01:54:36.941808 systemd[1]: Started sshd@19-10.0.0.110:22-10.0.0.1:33952.service - OpenSSH per-connection server daemon (10.0.0.1:33952).
Mar 7 01:54:36.947154 systemd-logind[1442]: Removed session 19.
Mar 7 01:54:37.011950 sshd[4391]: Accepted publickey for core from 10.0.0.1 port 33952 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:54:37.017702 sshd[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:54:37.037144 systemd-logind[1442]: New session 20 of user core.
Mar 7 01:54:37.046312 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 01:54:37.682362 sshd[4391]: pam_unix(sshd:session): session closed for user core
Mar 7 01:54:37.715841 systemd[1]: sshd@19-10.0.0.110:22-10.0.0.1:33952.service: Deactivated successfully.
Mar 7 01:54:37.732737 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 01:54:37.746496 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit.
Mar 7 01:54:37.774138 systemd[1]: Started sshd@20-10.0.0.110:22-10.0.0.1:33956.service - OpenSSH per-connection server daemon (10.0.0.1:33956).
Mar 7 01:54:37.779407 systemd-logind[1442]: Removed session 20.
Mar 7 01:54:37.874902 sshd[4405]: Accepted publickey for core from 10.0.0.1 port 33956 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:54:37.878984 sshd[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:54:37.909154 systemd-logind[1442]: New session 21 of user core.
Mar 7 01:54:37.919126 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 01:54:39.186256 sshd[4405]: pam_unix(sshd:session): session closed for user core
Mar 7 01:54:39.215584 systemd[1]: sshd@20-10.0.0.110:22-10.0.0.1:33956.service: Deactivated successfully.
Mar 7 01:54:39.227781 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 01:54:39.236156 systemd-logind[1442]: Session 21 logged out. Waiting for processes to exit.
Mar 7 01:54:39.263967 systemd[1]: Started sshd@21-10.0.0.110:22-10.0.0.1:33966.service - OpenSSH per-connection server daemon (10.0.0.1:33966).
Mar 7 01:54:39.270227 systemd-logind[1442]: Removed session 21.
Mar 7 01:54:39.450620 sshd[4441]: Accepted publickey for core from 10.0.0.1 port 33966 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:54:39.468361 sshd[4441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:54:39.500213 systemd-logind[1442]: New session 22 of user core.
Mar 7 01:54:39.511403 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 01:54:40.321841 sshd[4441]: pam_unix(sshd:session): session closed for user core
Mar 7 01:54:40.349063 systemd[1]: sshd@21-10.0.0.110:22-10.0.0.1:33966.service: Deactivated successfully.
Mar 7 01:54:40.357318 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 01:54:40.359747 systemd-logind[1442]: Session 22 logged out. Waiting for processes to exit.
Mar 7 01:54:40.387552 systemd[1]: Started sshd@22-10.0.0.110:22-10.0.0.1:38370.service - OpenSSH per-connection server daemon (10.0.0.1:38370).
Mar 7 01:54:40.399751 systemd-logind[1442]: Removed session 22.
Mar 7 01:54:40.529677 sshd[4459]: Accepted publickey for core from 10.0.0.1 port 38370 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:54:40.534468 sshd[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:54:40.622268 systemd-logind[1442]: New session 23 of user core.
Mar 7 01:54:40.632736 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 7 01:54:41.109599 sshd[4459]: pam_unix(sshd:session): session closed for user core
Mar 7 01:54:41.152152 systemd[1]: sshd@22-10.0.0.110:22-10.0.0.1:38370.service: Deactivated successfully.
Mar 7 01:54:41.167081 systemd[1]: session-23.scope: Deactivated successfully.
Mar 7 01:54:41.173717 systemd-logind[1442]: Session 23 logged out. Waiting for processes to exit.
Mar 7 01:54:41.184888 systemd-logind[1442]: Removed session 23.
Mar 7 01:54:43.739796 kubelet[2641]: E0307 01:54:43.737542 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:46.168514 systemd[1]: Started sshd@23-10.0.0.110:22-10.0.0.1:38406.service - OpenSSH per-connection server daemon (10.0.0.1:38406). Mar 7 01:54:46.335912 sshd[4507]: Accepted publickey for core from 10.0.0.1 port 38406 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:54:46.350395 sshd[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:54:46.399154 systemd-logind[1442]: New session 24 of user core. Mar 7 01:54:46.438902 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 7 01:54:46.740420 kubelet[2641]: E0307 01:54:46.739724 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:47.012414 sshd[4507]: pam_unix(sshd:session): session closed for user core Mar 7 01:54:47.038830 systemd[1]: sshd@23-10.0.0.110:22-10.0.0.1:38406.service: Deactivated successfully. Mar 7 01:54:47.054298 systemd[1]: session-24.scope: Deactivated successfully. Mar 7 01:54:47.064362 systemd-logind[1442]: Session 24 logged out. Waiting for processes to exit. Mar 7 01:54:47.077768 systemd-logind[1442]: Removed session 24. Mar 7 01:54:52.059802 systemd[1]: Started sshd@24-10.0.0.110:22-10.0.0.1:56810.service - OpenSSH per-connection server daemon (10.0.0.1:56810). Mar 7 01:54:52.189410 sshd[4541]: Accepted publickey for core from 10.0.0.1 port 56810 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:54:52.210552 sshd[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:54:52.260338 systemd-logind[1442]: New session 25 of user core. 
Mar 7 01:54:52.281455 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 7 01:54:52.725687 sshd[4541]: pam_unix(sshd:session): session closed for user core Mar 7 01:54:52.764130 systemd-logind[1442]: Session 25 logged out. Waiting for processes to exit. Mar 7 01:54:52.773903 systemd[1]: sshd@24-10.0.0.110:22-10.0.0.1:56810.service: Deactivated successfully. Mar 7 01:54:52.784757 systemd[1]: session-25.scope: Deactivated successfully. Mar 7 01:54:52.799431 systemd-logind[1442]: Removed session 25. Mar 7 01:54:56.781573 kubelet[2641]: E0307 01:54:56.779421 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:57.758941 systemd[1]: Started sshd@25-10.0.0.110:22-10.0.0.1:56820.service - OpenSSH per-connection server daemon (10.0.0.1:56820). Mar 7 01:54:57.964585 sshd[4577]: Accepted publickey for core from 10.0.0.1 port 56820 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:54:57.970550 sshd[4577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:54:58.011629 systemd-logind[1442]: New session 26 of user core. Mar 7 01:54:58.040357 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 7 01:54:58.481544 sshd[4577]: pam_unix(sshd:session): session closed for user core Mar 7 01:54:58.512971 systemd[1]: sshd@25-10.0.0.110:22-10.0.0.1:56820.service: Deactivated successfully. Mar 7 01:54:58.543470 systemd[1]: session-26.scope: Deactivated successfully. Mar 7 01:54:58.555497 systemd-logind[1442]: Session 26 logged out. Waiting for processes to exit. Mar 7 01:54:58.566754 systemd-logind[1442]: Removed session 26. Mar 7 01:55:03.622720 systemd[1]: Started sshd@26-10.0.0.110:22-10.0.0.1:37768.service - OpenSSH per-connection server daemon (10.0.0.1:37768). 
Mar 7 01:55:03.827575 sshd[4614]: Accepted publickey for core from 10.0.0.1 port 37768 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:55:03.836801 sshd[4614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:55:03.864858 systemd-logind[1442]: New session 27 of user core. Mar 7 01:55:03.874818 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 7 01:55:04.376129 sshd[4614]: pam_unix(sshd:session): session closed for user core Mar 7 01:55:04.405237 systemd[1]: sshd@26-10.0.0.110:22-10.0.0.1:37768.service: Deactivated successfully. Mar 7 01:55:04.421844 systemd[1]: session-27.scope: Deactivated successfully. Mar 7 01:55:04.433936 systemd-logind[1442]: Session 27 logged out. Waiting for processes to exit. Mar 7 01:55:04.441817 systemd-logind[1442]: Removed session 27. Mar 7 01:55:09.471354 systemd[1]: Started sshd@27-10.0.0.110:22-10.0.0.1:37780.service - OpenSSH per-connection server daemon (10.0.0.1:37780). Mar 7 01:55:09.623426 sshd[4649]: Accepted publickey for core from 10.0.0.1 port 37780 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:55:09.627690 sshd[4649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:55:09.662795 systemd-logind[1442]: New session 28 of user core. Mar 7 01:55:09.693328 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 7 01:55:10.436102 sshd[4649]: pam_unix(sshd:session): session closed for user core Mar 7 01:55:10.466976 systemd[1]: sshd@27-10.0.0.110:22-10.0.0.1:37780.service: Deactivated successfully. Mar 7 01:55:10.485470 systemd[1]: session-28.scope: Deactivated successfully. Mar 7 01:55:10.493518 systemd-logind[1442]: Session 28 logged out. Waiting for processes to exit. Mar 7 01:55:10.502088 systemd-logind[1442]: Removed session 28. 
Mar 7 01:55:11.743362 kubelet[2641]: E0307 01:55:11.741956 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"