Mar 6 01:39:04.238020 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 5 23:31:42 -00 2026
Mar 6 01:39:04.238042 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c
Mar 6 01:39:04.238054 kernel: BIOS-provided physical RAM map:
Mar 6 01:39:04.238060 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 6 01:39:04.238065 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 6 01:39:04.238071 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 6 01:39:04.238078 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 6 01:39:04.238084 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 6 01:39:04.238089 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 6 01:39:04.238098 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 6 01:39:04.238104 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 6 01:39:04.238109 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 6 01:39:04.238139 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 6 01:39:04.238146 kernel: NX (Execute Disable) protection: active
Mar 6 01:39:04.238153 kernel: APIC: Static calls initialized
Mar 6 01:39:04.238180 kernel: SMBIOS 2.8 present.
Mar 6 01:39:04.238187 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 6 01:39:04.238193 kernel: Hypervisor detected: KVM
Mar 6 01:39:04.238199 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 6 01:39:04.238205 kernel: kvm-clock: using sched offset of 13202230882 cycles
Mar 6 01:39:04.238212 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 6 01:39:04.238218 kernel: tsc: Detected 2445.424 MHz processor
Mar 6 01:39:04.238224 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 6 01:39:04.238231 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 6 01:39:04.238241 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 6 01:39:04.238247 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 6 01:39:04.238253 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 6 01:39:04.238259 kernel: Using GB pages for direct mapping
Mar 6 01:39:04.238265 kernel: ACPI: Early table checksum verification disabled
Mar 6 01:39:04.238272 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 6 01:39:04.238278 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:39:04.238284 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:39:04.238290 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:39:04.238299 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 6 01:39:04.238306 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:39:04.238312 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:39:04.238318 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:39:04.238324 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:39:04.238330 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 6 01:39:04.238337 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 6 01:39:04.238347 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 6 01:39:04.238356 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 6 01:39:04.238363 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 6 01:39:04.238369 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 6 01:39:04.238376 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 6 01:39:04.238382 kernel: No NUMA configuration found
Mar 6 01:39:04.238389 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 6 01:39:04.238398 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 6 01:39:04.238404 kernel: Zone ranges:
Mar 6 01:39:04.238411 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 6 01:39:04.238417 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 6 01:39:04.238424 kernel: Normal empty
Mar 6 01:39:04.238430 kernel: Movable zone start for each node
Mar 6 01:39:04.238436 kernel: Early memory node ranges
Mar 6 01:39:04.238443 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 6 01:39:04.238479 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 6 01:39:04.238487 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 6 01:39:04.238497 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 6 01:39:04.238520 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 6 01:39:04.238527 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 6 01:39:04.238533 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 6 01:39:04.238540 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 6 01:39:04.238546 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 6 01:39:04.238552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 6 01:39:04.238575 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 6 01:39:04.238582 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 6 01:39:04.238592 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 6 01:39:04.238598 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 6 01:39:04.238605 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 6 01:39:04.238611 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 6 01:39:04.238618 kernel: TSC deadline timer available
Mar 6 01:39:04.238624 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 6 01:39:04.238630 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 6 01:39:04.238637 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 6 01:39:04.238659 kernel: kvm-guest: setup PV sched yield
Mar 6 01:39:04.238669 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 6 01:39:04.238676 kernel: Booting paravirtualized kernel on KVM
Mar 6 01:39:04.238682 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 6 01:39:04.238689 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 6 01:39:04.238695 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 6 01:39:04.238702 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 6 01:39:04.238708 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 6 01:39:04.238714 kernel: kvm-guest: PV spinlocks enabled
Mar 6 01:39:04.238721 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 6 01:39:04.238731 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c
Mar 6 01:39:04.238738 kernel: random: crng init done
Mar 6 01:39:04.238744 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 6 01:39:04.238751 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 6 01:39:04.238757 kernel: Fallback order for Node 0: 0
Mar 6 01:39:04.238764 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 6 01:39:04.238770 kernel: Policy zone: DMA32
Mar 6 01:39:04.238776 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 6 01:39:04.238900 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved)
Mar 6 01:39:04.238908 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 6 01:39:04.238914 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 6 01:39:04.238921 kernel: ftrace: allocated 149 pages with 4 groups
Mar 6 01:39:04.238927 kernel: Dynamic Preempt: voluntary
Mar 6 01:39:04.238934 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 6 01:39:04.238941 kernel: rcu: RCU event tracing is enabled.
Mar 6 01:39:04.238948 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 6 01:39:04.238955 kernel: Trampoline variant of Tasks RCU enabled.
Mar 6 01:39:04.238966 kernel: Rude variant of Tasks RCU enabled.
Mar 6 01:39:04.238973 kernel: Tracing variant of Tasks RCU enabled.
Mar 6 01:39:04.238979 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 6 01:39:04.238986 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 6 01:39:04.239010 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 6 01:39:04.239017 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 6 01:39:04.239023 kernel: Console: colour VGA+ 80x25
Mar 6 01:39:04.239030 kernel: printk: console [ttyS0] enabled
Mar 6 01:39:04.239036 kernel: ACPI: Core revision 20230628
Mar 6 01:39:04.239046 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 6 01:39:04.239053 kernel: APIC: Switch to symmetric I/O mode setup
Mar 6 01:39:04.239059 kernel: x2apic enabled
Mar 6 01:39:04.239066 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 6 01:39:04.239072 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 6 01:39:04.239079 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 6 01:39:04.239085 kernel: kvm-guest: setup PV IPIs
Mar 6 01:39:04.239092 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 6 01:39:04.239128 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 6 01:39:04.239135 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Mar 6 01:39:04.239141 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 6 01:39:04.239148 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 6 01:39:04.239158 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 6 01:39:04.239165 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 6 01:39:04.239172 kernel: Spectre V2 : Mitigation: Retpolines
Mar 6 01:39:04.239179 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 6 01:39:04.239185 kernel: Speculative Store Bypass: Vulnerable
Mar 6 01:39:04.239195 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 6 01:39:04.239218 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 6 01:39:04.239226 kernel: active return thunk: srso_alias_return_thunk
Mar 6 01:39:04.239233 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 6 01:39:04.239239 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 6 01:39:04.239246 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 6 01:39:04.239253 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 6 01:39:04.239260 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 6 01:39:04.239270 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 6 01:39:04.239277 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 6 01:39:04.239284 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 6 01:39:04.239291 kernel: Freeing SMP alternatives memory: 32K
Mar 6 01:39:04.239298 kernel: pid_max: default: 32768 minimum: 301
Mar 6 01:39:04.239304 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 6 01:39:04.239311 kernel: landlock: Up and running.
Mar 6 01:39:04.239318 kernel: SELinux: Initializing.
Mar 6 01:39:04.239325 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 01:39:04.239334 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 01:39:04.239341 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 6 01:39:04.239348 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 01:39:04.239355 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 01:39:04.239362 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 01:39:04.239369 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 6 01:39:04.239376 kernel: signal: max sigframe size: 1776
Mar 6 01:39:04.239397 kernel: rcu: Hierarchical SRCU implementation.
Mar 6 01:39:04.239404 kernel: rcu: Max phase no-delay instances is 400.
Mar 6 01:39:04.239415 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 6 01:39:04.239421 kernel: smp: Bringing up secondary CPUs ...
Mar 6 01:39:04.239428 kernel: smpboot: x86: Booting SMP configuration:
Mar 6 01:39:04.239435 kernel: .... node #0, CPUs: #1 #2 #3
Mar 6 01:39:04.239442 kernel: smp: Brought up 1 node, 4 CPUs
Mar 6 01:39:04.239448 kernel: smpboot: Max logical packages: 1
Mar 6 01:39:04.239479 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Mar 6 01:39:04.239486 kernel: devtmpfs: initialized
Mar 6 01:39:04.239493 kernel: x86/mm: Memory block size: 128MB
Mar 6 01:39:04.239503 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 6 01:39:04.239510 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 6 01:39:04.239516 kernel: pinctrl core: initialized pinctrl subsystem
Mar 6 01:39:04.239523 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 6 01:39:04.239530 kernel: audit: initializing netlink subsys (disabled)
Mar 6 01:39:04.239537 kernel: audit: type=2000 audit(1772761141.894:1): state=initialized audit_enabled=0 res=1
Mar 6 01:39:04.239543 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 6 01:39:04.239550 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 6 01:39:04.239557 kernel: cpuidle: using governor menu
Mar 6 01:39:04.239567 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 6 01:39:04.239573 kernel: dca service started, version 1.12.1
Mar 6 01:39:04.239580 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 6 01:39:04.239587 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 6 01:39:04.239594 kernel: PCI: Using configuration type 1 for base access
Mar 6 01:39:04.239601 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 6 01:39:04.239607 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 6 01:39:04.239614 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 6 01:39:04.239621 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 6 01:39:04.239631 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 6 01:39:04.239637 kernel: ACPI: Added _OSI(Module Device)
Mar 6 01:39:04.239644 kernel: ACPI: Added _OSI(Processor Device)
Mar 6 01:39:04.239651 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 6 01:39:04.239658 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 6 01:39:04.239664 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 6 01:39:04.239671 kernel: ACPI: Interpreter enabled
Mar 6 01:39:04.239678 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 6 01:39:04.239684 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 6 01:39:04.239694 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 6 01:39:04.239701 kernel: PCI: Using E820 reservations for host bridge windows
Mar 6 01:39:04.239708 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 6 01:39:04.239714 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 6 01:39:04.240144 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 6 01:39:04.240311 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 6 01:39:04.240504 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 6 01:39:04.240521 kernel: PCI host bridge to bus 0000:00
Mar 6 01:39:04.240740 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 6 01:39:04.240944 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 6 01:39:04.241083 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 6 01:39:04.241217 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 6 01:39:04.241350 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 6 01:39:04.241530 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 6 01:39:04.241677 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 6 01:39:04.241982 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 6 01:39:04.242186 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 6 01:39:04.242336 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 6 01:39:04.242529 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 6 01:39:04.242680 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 6 01:39:04.242875 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 6 01:39:04.243094 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 6 01:39:04.243247 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 6 01:39:04.243441 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 6 01:39:04.243749 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 6 01:39:04.244016 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 6 01:39:04.244170 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 6 01:39:04.244324 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 6 01:39:04.244513 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 6 01:39:04.244746 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 6 01:39:04.244990 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 6 01:39:04.245148 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 6 01:39:04.245295 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 6 01:39:04.245440 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 6 01:39:04.245702 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 6 01:39:04.245903 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 6 01:39:04.246123 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 6 01:39:04.246275 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 6 01:39:04.246421 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 6 01:39:04.246634 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 6 01:39:04.246837 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 6 01:39:04.246855 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 6 01:39:04.246863 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 6 01:39:04.246870 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 6 01:39:04.246877 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 6 01:39:04.246883 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 6 01:39:04.246890 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 6 01:39:04.246897 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 6 01:39:04.246903 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 6 01:39:04.246910 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 6 01:39:04.246920 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 6 01:39:04.246927 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 6 01:39:04.246934 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 6 01:39:04.246941 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 6 01:39:04.246948 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 6 01:39:04.246954 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 6 01:39:04.246961 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 6 01:39:04.246968 kernel: iommu: Default domain type: Translated
Mar 6 01:39:04.246975 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 6 01:39:04.246985 kernel: PCI: Using ACPI for IRQ routing
Mar 6 01:39:04.246991 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 6 01:39:04.246998 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 6 01:39:04.247005 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 6 01:39:04.247159 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 6 01:39:04.247305 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 6 01:39:04.247481 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 6 01:39:04.247493 kernel: vgaarb: loaded
Mar 6 01:39:04.247505 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 6 01:39:04.247512 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 6 01:39:04.247518 kernel: clocksource: Switched to clocksource kvm-clock
Mar 6 01:39:04.247525 kernel: VFS: Disk quotas dquot_6.6.0
Mar 6 01:39:04.247532 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 6 01:39:04.247539 kernel: pnp: PnP ACPI init
Mar 6 01:39:04.247992 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 6 01:39:04.248011 kernel: pnp: PnP ACPI: found 6 devices
Mar 6 01:39:04.248024 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 6 01:39:04.248031 kernel: NET: Registered PF_INET protocol family
Mar 6 01:39:04.248038 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 6 01:39:04.248045 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 6 01:39:04.248052 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 6 01:39:04.248060 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 6 01:39:04.248066 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 6 01:39:04.248073 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 6 01:39:04.248080 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 01:39:04.248090 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 01:39:04.248097 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 6 01:39:04.248104 kernel: NET: Registered PF_XDP protocol family
Mar 6 01:39:04.248284 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 6 01:39:04.248441 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 6 01:39:04.248638 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 6 01:39:04.248844 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 6 01:39:04.249060 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 6 01:39:04.249205 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 6 01:39:04.249214 kernel: PCI: CLS 0 bytes, default 64
Mar 6 01:39:04.249222 kernel: Initialise system trusted keyrings
Mar 6 01:39:04.249228 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 6 01:39:04.249235 kernel: Key type asymmetric registered
Mar 6 01:39:04.249242 kernel: Asymmetric key parser 'x509' registered
Mar 6 01:39:04.249249 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 6 01:39:04.249256 kernel: io scheduler mq-deadline registered
Mar 6 01:39:04.249263 kernel: io scheduler kyber registered
Mar 6 01:39:04.249274 kernel: io scheduler bfq registered
Mar 6 01:39:04.249281 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 6 01:39:04.249288 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 6 01:39:04.249295 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 6 01:39:04.249303 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 6 01:39:04.249309 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 6 01:39:04.249316 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 6 01:39:04.249323 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 6 01:39:04.249330 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 6 01:39:04.249340 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 6 01:39:04.249581 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 6 01:39:04.249594 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 6 01:39:04.249735 kernel: rtc_cmos 00:04: registered as rtc0
Mar 6 01:39:04.249969 kernel: rtc_cmos 00:04: setting system clock to 2026-03-06T01:39:03 UTC (1772761143)
Mar 6 01:39:04.250116 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 6 01:39:04.250125 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 6 01:39:04.250133 kernel: NET: Registered PF_INET6 protocol family
Mar 6 01:39:04.250145 kernel: Segment Routing with IPv6
Mar 6 01:39:04.250152 kernel: In-situ OAM (IOAM) with IPv6
Mar 6 01:39:04.250159 kernel: NET: Registered PF_PACKET protocol family
Mar 6 01:39:04.250166 kernel: Key type dns_resolver registered
Mar 6 01:39:04.250173 kernel: IPI shorthand broadcast: enabled
Mar 6 01:39:04.250180 kernel: sched_clock: Marking stable (2612026856, 521458104)->(3430077743, -296592783)
Mar 6 01:39:04.250187 kernel: registered taskstats version 1
Mar 6 01:39:04.250193 kernel: Loading compiled-in X.509 certificates
Mar 6 01:39:04.250201 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6d88f6264570591a57b3c9c1e1c99fca6c68b8ca'
Mar 6 01:39:04.250210 kernel: Key type .fscrypt registered
Mar 6 01:39:04.250217 kernel: Key type fscrypt-provisioning registered
Mar 6 01:39:04.250224 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 6 01:39:04.250231 kernel: ima: Allocated hash algorithm: sha1
Mar 6 01:39:04.250238 kernel: ima: No architecture policies found
Mar 6 01:39:04.250244 kernel: clk: Disabling unused clocks
Mar 6 01:39:04.250251 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 6 01:39:04.250258 kernel: Write protecting the kernel read-only data: 36864k
Mar 6 01:39:04.250265 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 6 01:39:04.250275 kernel: Run /init as init process
Mar 6 01:39:04.250282 kernel: with arguments:
Mar 6 01:39:04.250288 kernel: /init
Mar 6 01:39:04.250295 kernel: with environment:
Mar 6 01:39:04.250302 kernel: HOME=/
Mar 6 01:39:04.250309 kernel: TERM=linux
Mar 6 01:39:04.250317 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 6 01:39:04.250326 systemd[1]: Detected virtualization kvm.
Mar 6 01:39:04.250337 systemd[1]: Detected architecture x86-64.
Mar 6 01:39:04.250344 systemd[1]: Running in initrd.
Mar 6 01:39:04.250351 systemd[1]: No hostname configured, using default hostname.
Mar 6 01:39:04.250358 systemd[1]: Hostname set to .
Mar 6 01:39:04.250366 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 01:39:04.250373 systemd[1]: Queued start job for default target initrd.target.
Mar 6 01:39:04.250380 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 01:39:04.250388 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 01:39:04.250398 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 6 01:39:04.250406 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 01:39:04.250413 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 6 01:39:04.250420 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 6 01:39:04.250429 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 6 01:39:04.250436 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 6 01:39:04.250444 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 01:39:04.250500 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 01:39:04.250507 systemd[1]: Reached target paths.target - Path Units.
Mar 6 01:39:04.250515 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 01:39:04.250522 systemd[1]: Reached target swap.target - Swaps.
Mar 6 01:39:04.250545 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 01:39:04.250555 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 01:39:04.250566 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 01:39:04.250573 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 6 01:39:04.250581 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 6 01:39:04.250588 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 01:39:04.250596 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 01:39:04.250603 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 01:39:04.250611 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 01:39:04.250618 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 6 01:39:04.250626 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 01:39:04.250636 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 6 01:39:04.250644 systemd[1]: Starting systemd-fsck-usr.service...
Mar 6 01:39:04.250651 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 01:39:04.250659 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 01:39:04.250666 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:39:04.250697 systemd-journald[195]: Collecting audit messages is disabled.
Mar 6 01:39:04.250718 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 6 01:39:04.250726 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 01:39:04.250734 systemd-journald[195]: Journal started
Mar 6 01:39:04.250752 systemd-journald[195]: Runtime Journal (/run/log/journal/e7c932bc5e4541fdaccc72e01a5884ca) is 6.0M, max 48.4M, 42.3M free.
Mar 6 01:39:04.258974 systemd-modules-load[196]: Inserted module 'overlay'
Mar 6 01:39:04.393567 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 01:39:04.393598 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 6 01:39:04.393612 kernel: Bridge firewalling registered
Mar 6 01:39:04.265352 systemd[1]: Finished systemd-fsck-usr.service.
Mar 6 01:39:04.291596 systemd-modules-load[196]: Inserted module 'br_netfilter'
Mar 6 01:39:04.405741 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 01:39:04.413143 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:39:04.437088 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:39:04.438299 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 01:39:04.439599 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 01:39:04.454902 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 01:39:04.458420 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 01:39:04.490159 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:39:04.499585 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 01:39:04.519182 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 6 01:39:04.523168 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 01:39:04.529101 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 01:39:04.538685 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 01:39:04.550638 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 01:39:04.560237 dracut-cmdline[225]: dracut-dracut-053
Mar 6 01:39:04.564568 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c
Mar 6 01:39:04.584181 systemd-resolved[231]: Positive Trust Anchors:
Mar 6 01:39:04.584213 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 01:39:04.584240 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 01:39:04.587050 systemd-resolved[231]: Defaulting to hostname 'linux'.
Mar 6 01:39:04.588851 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 6 01:39:04.594728 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 6 01:39:04.671885 kernel: SCSI subsystem initialized
Mar 6 01:39:04.682923 kernel: Loading iSCSI transport class v2.0-870.
Mar 6 01:39:04.694859 kernel: iscsi: registered transport (tcp)
Mar 6 01:39:04.722922 kernel: iscsi: registered transport (qla4xxx)
Mar 6 01:39:04.723021 kernel: QLogic iSCSI HBA Driver
Mar 6 01:39:04.795041 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 6 01:39:04.814009 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 6 01:39:04.856616 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 6 01:39:04.856660 kernel: device-mapper: uevent: version 1.0.3
Mar 6 01:39:04.861889 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 6 01:39:04.911870 kernel: raid6: avx2x4 gen() 32138 MB/s
Mar 6 01:39:04.929871 kernel: raid6: avx2x2 gen() 29324 MB/s
Mar 6 01:39:04.949517 kernel: raid6: avx2x1 gen() 24369 MB/s
Mar 6 01:39:04.949584 kernel: raid6: using algorithm avx2x4 gen() 32138 MB/s
Mar 6 01:39:04.969094 kernel: raid6: .... xor() 4786 MB/s, rmw enabled
Mar 6 01:39:04.969162 kernel: raid6: using avx2x2 recovery algorithm
Mar 6 01:39:04.990875 kernel: xor: automatically using best checksumming function avx
Mar 6 01:39:05.163851 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 6 01:39:05.182413 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 01:39:05.199217 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 01:39:05.224707 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Mar 6 01:39:05.234251 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 01:39:05.254003 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 6 01:39:05.274588 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Mar 6 01:39:05.332536 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 01:39:05.350036 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 01:39:05.452029 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 01:39:05.474098 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 6 01:39:05.495224 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 6 01:39:05.505545 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 01:39:05.515356 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 01:39:05.525100 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 01:39:05.535880 kernel: cryptd: max_cpu_qlen set to 1000
Mar 6 01:39:05.542886 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 6 01:39:05.555951 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 6 01:39:05.559241 kernel: libata version 3.00 loaded.
Mar 6 01:39:05.559147 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 6 01:39:05.577730 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 6 01:39:05.577750 kernel: GPT:9289727 != 19775487
Mar 6 01:39:05.577761 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 6 01:39:05.577778 kernel: GPT:9289727 != 19775487
Mar 6 01:39:05.577830 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 6 01:39:05.577842 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:39:05.569374 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 6 01:39:05.569670 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:39:05.591950 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:39:05.597275 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 01:39:05.597424 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:39:05.601696 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:39:05.615257 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:39:05.644602 kernel: BTRFS: device fsid eccec0b1-0068-4620-ab61-f332f16460fa devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (471)
Mar 6 01:39:05.644628 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (468)
Mar 6 01:39:05.635682 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 6 01:39:05.655447 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 6 01:39:05.668030 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 6 01:39:05.703916 kernel: ahci 0000:00:1f.2: version 3.0
Mar 6 01:39:05.704237 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 6 01:39:05.704259 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 6 01:39:05.704275 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 6 01:39:05.704619 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 6 01:39:05.704969 kernel: scsi host0: ahci
Mar 6 01:39:05.705241 kernel: scsi host1: ahci
Mar 6 01:39:05.705539 kernel: scsi host2: ahci
Mar 6 01:39:05.705864 kernel: scsi host3: ahci
Mar 6 01:39:05.706200 kernel: scsi host4: ahci
Mar 6 01:39:05.706447 kernel: scsi host5: ahci
Mar 6 01:39:05.706760 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 6 01:39:05.706780 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 6 01:39:05.706889 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 6 01:39:05.706909 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 6 01:39:05.706926 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 6 01:39:05.680964 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 6 01:39:05.714437 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 6 01:39:05.714492 kernel: AES CTR mode by8 optimization enabled
Mar 6 01:39:05.688668 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 01:39:05.733268 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 6 01:39:05.751039 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 6 01:39:05.895127 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:39:05.913054 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:39:05.901495 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:39:05.918568 disk-uuid[554]: Primary Header is updated.
Mar 6 01:39:05.918568 disk-uuid[554]: Secondary Entries is updated.
Mar 6 01:39:05.918568 disk-uuid[554]: Secondary Header is updated.
Mar 6 01:39:05.933344 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:39:05.940678 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:39:06.037842 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 6 01:39:06.037901 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 6 01:39:06.041831 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 6 01:39:06.045870 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 6 01:39:06.048853 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 6 01:39:06.051898 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 6 01:39:06.056721 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 6 01:39:06.056754 kernel: ata3.00: applying bridge limits
Mar 6 01:39:06.059918 kernel: ata3.00: configured for UDMA/100
Mar 6 01:39:06.065004 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 6 01:39:06.123153 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 6 01:39:06.123587 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 6 01:39:06.136902 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 6 01:39:06.924733 disk-uuid[556]: The operation has completed successfully.
Mar 6 01:39:06.928739 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:39:06.964258 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 6 01:39:06.964521 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 6 01:39:06.991307 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 6 01:39:07.001139 sh[591]: Success
Mar 6 01:39:07.023860 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 6 01:39:07.076590 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 6 01:39:07.095598 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 6 01:39:07.099343 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 6 01:39:07.120051 kernel: BTRFS info (device dm-0): first mount of filesystem eccec0b1-0068-4620-ab61-f332f16460fa
Mar 6 01:39:07.120108 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:39:07.120129 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 6 01:39:07.125268 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 6 01:39:07.125289 kernel: BTRFS info (device dm-0): using free space tree
Mar 6 01:39:07.135177 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 6 01:39:07.136097 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 6 01:39:07.154109 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 6 01:39:07.155274 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 6 01:39:07.179827 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:39:07.183720 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:39:07.183752 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:39:07.188840 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:39:07.202142 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 6 01:39:07.206713 kernel: BTRFS info (device vda6): last unmount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:39:07.217288 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 6 01:39:07.226027 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 6 01:39:07.288151 ignition[697]: Ignition 2.19.0
Mar 6 01:39:07.288170 ignition[697]: Stage: fetch-offline
Mar 6 01:39:07.288223 ignition[697]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:39:07.288240 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:39:07.288372 ignition[697]: parsed url from cmdline: ""
Mar 6 01:39:07.288379 ignition[697]: no config URL provided
Mar 6 01:39:07.288388 ignition[697]: reading system config file "/usr/lib/ignition/user.ign"
Mar 6 01:39:07.288403 ignition[697]: no config at "/usr/lib/ignition/user.ign"
Mar 6 01:39:07.288440 ignition[697]: op(1): [started] loading QEMU firmware config module
Mar 6 01:39:07.288448 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 6 01:39:07.297534 ignition[697]: op(1): [finished] loading QEMU firmware config module
Mar 6 01:39:07.326863 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 01:39:07.338009 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 6 01:39:07.371876 systemd-networkd[779]: lo: Link UP
Mar 6 01:39:07.371918 systemd-networkd[779]: lo: Gained carrier
Mar 6 01:39:07.381882 ignition[697]: parsing config with SHA512: 8ef2e27a173c028314aacd1d55eb378cac4239a4dffd9613b6337d6cd5d8e2de896509a826013c6ae074b123b2f01dddf016114aec8333ed7c98373d09bd48c5
Mar 6 01:39:07.374152 systemd-networkd[779]: Enumeration completed
Mar 6 01:39:07.386202 ignition[697]: fetch-offline: fetch-offline passed
Mar 6 01:39:07.374331 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 6 01:39:07.386272 ignition[697]: Ignition finished successfully
Mar 6 01:39:07.375353 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 01:39:07.375359 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 6 01:39:07.377378 systemd-networkd[779]: eth0: Link UP
Mar 6 01:39:07.377383 systemd-networkd[779]: eth0: Gained carrier
Mar 6 01:39:07.377393 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 01:39:07.381656 systemd[1]: Reached target network.target - Network.
Mar 6 01:39:07.385727 unknown[697]: fetched base config from "system"
Mar 6 01:39:07.385736 unknown[697]: fetched user config from "qemu"
Mar 6 01:39:07.389037 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 01:39:07.395251 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 6 01:39:07.439889 ignition[782]: Ignition 2.19.0
Mar 6 01:39:07.414972 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.102/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 6 01:39:07.439903 ignition[782]: Stage: kargs
Mar 6 01:39:07.415119 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 6 01:39:07.440193 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:39:07.447261 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 6 01:39:07.440213 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:39:07.463995 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 6 01:39:07.441651 ignition[782]: kargs: kargs passed
Mar 6 01:39:07.441730 ignition[782]: Ignition finished successfully
Mar 6 01:39:07.489257 ignition[791]: Ignition 2.19.0
Mar 6 01:39:07.489283 ignition[791]: Stage: disks
Mar 6 01:39:07.489529 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:39:07.492991 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 6 01:39:07.489546 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:39:07.498051 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 6 01:39:07.490396 ignition[791]: disks: disks passed
Mar 6 01:39:07.506378 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 6 01:39:07.490493 ignition[791]: Ignition finished successfully
Mar 6 01:39:07.511344 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 01:39:07.515606 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 6 01:39:07.555332 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 6 01:39:07.519720 systemd[1]: Reached target basic.target - Basic System.
Mar 6 01:39:07.533322 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 6 01:39:07.556661 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 6 01:39:07.581057 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 6 01:39:07.698854 kernel: EXT4-fs (vda9): mounted filesystem 6fb83788-0471-4e89-b45f-3a7586a627a9 r/w with ordered data mode. Quota mode: none.
Mar 6 01:39:07.699409 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 6 01:39:07.705119 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 6 01:39:07.729136 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 01:39:07.742200 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (809)
Mar 6 01:39:07.733851 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 6 01:39:07.760518 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:39:07.760565 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:39:07.760577 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:39:07.760588 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:39:07.742595 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 6 01:39:07.742670 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 6 01:39:07.742710 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 01:39:07.763188 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 01:39:07.768586 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 6 01:39:07.786985 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 6 01:39:07.833357 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory
Mar 6 01:39:07.839678 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory
Mar 6 01:39:07.845009 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory
Mar 6 01:39:07.851958 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 6 01:39:07.985208 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 6 01:39:08.005968 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 6 01:39:08.012028 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 6 01:39:08.023858 kernel: BTRFS info (device vda6): last unmount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:39:08.044674 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 6 01:39:08.058645 ignition[923]: INFO : Ignition 2.19.0
Mar 6 01:39:08.058645 ignition[923]: INFO : Stage: mount
Mar 6 01:39:08.062885 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 01:39:08.062885 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:39:08.069207 ignition[923]: INFO : mount: mount passed
Mar 6 01:39:08.071241 ignition[923]: INFO : Ignition finished successfully
Mar 6 01:39:08.075334 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 6 01:39:08.091963 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 6 01:39:08.116032 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 6 01:39:08.129012 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 01:39:08.141855 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (935)
Mar 6 01:39:08.141894 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:39:08.149068 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:39:08.149118 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:39:08.158843 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:39:08.160976 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 01:39:08.191555 ignition[952]: INFO : Ignition 2.19.0
Mar 6 01:39:08.191555 ignition[952]: INFO : Stage: files
Mar 6 01:39:08.196665 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 01:39:08.196665 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:39:08.204918 ignition[952]: DEBUG : files: compiled without relabeling support, skipping
Mar 6 01:39:08.208867 ignition[952]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 6 01:39:08.208867 ignition[952]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 6 01:39:08.220190 ignition[952]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 6 01:39:08.224267 ignition[952]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 6 01:39:08.228300 unknown[952]: wrote ssh authorized keys file for user: core
Mar 6 01:39:08.231005 ignition[952]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 6 01:39:08.235707 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 6 01:39:08.240268 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 6 01:39:08.244689 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 6 01:39:08.251009 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 6 01:39:08.339171 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 6 01:39:08.425984 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 6 01:39:08.425984 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 6 01:39:08.441081 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 6 01:39:08.441081 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 01:39:08.456015 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 01:39:08.462433 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 01:39:08.468737 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 01:39:08.468737 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 01:39:08.479242 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 01:39:08.479242 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 01:39:08.479242 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 01:39:08.479242 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 6 01:39:08.479242 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 6 01:39:08.479242 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 6 01:39:08.479242 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 6 01:39:08.741996 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 6 01:39:09.302010 systemd-networkd[779]: eth0: Gained IPv6LL
Mar 6 01:39:09.440606 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 6 01:39:09.440606 ignition[952]: INFO : files: op(c): [started] processing unit "containerd.service"
Mar 6 01:39:09.452139 ignition[952]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 6 01:39:09.460443 ignition[952]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 6 01:39:09.460443 ignition[952]: INFO : files: op(c): [finished] processing unit "containerd.service"
Mar 6 01:39:09.460443 ignition[952]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Mar 6 01:39:09.474260 ignition[952]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 01:39:09.474260 ignition[952]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 01:39:09.474260 ignition[952]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Mar 6 01:39:09.474260 ignition[952]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Mar 6 01:39:09.474260 ignition[952]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 6 01:39:09.496150 ignition[952]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 6 01:39:09.496150 ignition[952]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Mar 6 01:39:09.504393 ignition[952]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Mar 6 01:39:09.538386 ignition[952]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 6 01:39:09.548423 ignition[952]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 6 01:39:09.553029 ignition[952]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 6 01:39:09.553029 ignition[952]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Mar 6 01:39:09.553029 ignition[952]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Mar 6 01:39:09.553029 ignition[952]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 01:39:09.553029 ignition[952]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 01:39:09.553029 ignition[952]: INFO : files: files passed
Mar 6 01:39:09.553029 ignition[952]: INFO : Ignition finished successfully
Mar 6 01:39:09.580431 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 6 01:39:09.598090 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 6 01:39:09.601952 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 6 01:39:09.612712 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 6 01:39:09.612951 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 6 01:39:09.623027 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 6 01:39:09.626959 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 01:39:09.634712 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 01:39:09.634712 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 01:39:09.629043 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 01:39:09.634956 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 6 01:39:09.643135 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 6 01:39:09.684501 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 6 01:39:09.684680 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 6 01:39:09.690730 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 6 01:39:09.696770 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 6 01:39:09.699603 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 6 01:39:09.700877 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 6 01:39:09.729978 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 01:39:09.746060 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 6 01:39:09.757580 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 6 01:39:09.762080 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 01:39:09.769135 systemd[1]: Stopped target timers.target - Timer Units.
Mar 6 01:39:09.776633 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 6 01:39:09.776877 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 01:39:09.784495 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 6 01:39:09.790079 systemd[1]: Stopped target basic.target - Basic System.
Mar 6 01:39:09.795677 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 6 01:39:09.795901 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 01:39:09.796340 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 6 01:39:09.796865 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 6 01:39:09.797732 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 01:39:09.798693 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 6 01:39:09.799617 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 6 01:39:09.902001 ignition[1008]: INFO : Ignition 2.19.0
Mar 6 01:39:09.902001 ignition[1008]: INFO : Stage: umount
Mar 6 01:39:09.902001 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 01:39:09.902001 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:39:09.902001 ignition[1008]: INFO : umount: umount passed
Mar 6 01:39:09.902001 ignition[1008]: INFO : Ignition finished successfully
Mar 6 01:39:09.800566 systemd[1]: Stopped target swap.target - Swaps.
Mar 6 01:39:09.801397 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 6 01:39:09.801558 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 01:39:09.802320 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 6 01:39:09.802822 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 01:39:09.803226 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 6 01:39:09.803362 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 01:39:09.803700 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 6 01:39:09.803904 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 6 01:39:09.804614 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 6 01:39:09.804745 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 01:39:09.805615 systemd[1]: Stopped target paths.target - Path Units.
Mar 6 01:39:09.806376 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 6 01:39:09.806679 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 01:39:09.807451 systemd[1]: Stopped target slices.target - Slice Units.
Mar 6 01:39:09.808376 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 6 01:39:09.808887 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 6 01:39:09.808999 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 01:39:09.809831 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 6 01:39:09.809953 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 01:39:09.810200 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 6 01:39:09.810327 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 01:39:09.810651 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 6 01:39:09.810775 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 6 01:39:09.876177 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 6 01:39:09.879860 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 6 01:39:09.884863 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 6 01:39:09.885256 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 01:39:09.893235 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 6 01:39:09.893413 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 01:39:09.904098 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 6 01:39:09.904242 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 6 01:39:09.910201 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 6 01:39:09.910348 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 6 01:39:09.919975 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 6 01:39:09.921170 systemd[1]: Stopped target network.target - Network.
Mar 6 01:39:09.926639 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 6 01:39:09.926740 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 6 01:39:09.934149 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 6 01:39:09.934232 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 6 01:39:09.940834 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 6 01:39:09.940934 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 6 01:39:09.945676 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 6 01:39:09.945754 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 6 01:39:09.951651 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 6 01:39:09.957065 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 6 01:39:09.963873 systemd-networkd[779]: eth0: DHCPv6 lease lost
Mar 6 01:39:09.966581 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 6 01:39:09.966771 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 6 01:39:09.974023 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 6 01:39:09.974218 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 6 01:39:09.985009 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 6 01:39:09.985200 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 6 01:39:09.989966 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 6 01:39:09.990040 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 01:39:09.994874 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 6 01:39:09.994943 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 6 01:39:10.012130 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 6 01:39:10.238858 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Mar 6 01:39:10.016951 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 6 01:39:10.017041 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 01:39:10.023455 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 6 01:39:10.023578 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 6 01:39:10.029111 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 6 01:39:10.029178 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 6 01:39:10.036147 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 6 01:39:10.036233 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 01:39:10.043483 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 01:39:10.066987 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 6 01:39:10.067163 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 6 01:39:10.087090 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 6 01:39:10.087367 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 01:39:10.092508 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 6 01:39:10.092627 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 6 01:39:10.097191 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 6 01:39:10.097268 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 01:39:10.103885 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 6 01:39:10.103976 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 01:39:10.111181 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 6 01:39:10.111282 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 6 01:39:10.113134 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 6 01:39:10.113286 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:39:10.138160 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 6 01:39:10.145551 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 6 01:39:10.145642 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 01:39:10.154013 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 6 01:39:10.154081 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 01:39:10.159878 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 6 01:39:10.159940 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 01:39:10.163222 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 01:39:10.163277 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:39:10.170237 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 6 01:39:10.170378 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 6 01:39:10.175766 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 6 01:39:10.189214 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 6 01:39:10.198356 systemd[1]: Switching root.
Mar 6 01:39:10.356195 systemd-journald[195]: Journal stopped
Mar 6 01:39:11.769924 kernel: SELinux: policy capability network_peer_controls=1
Mar 6 01:39:11.770002 kernel: SELinux: policy capability open_perms=1
Mar 6 01:39:11.770016 kernel: SELinux: policy capability extended_socket_class=1
Mar 6 01:39:11.770027 kernel: SELinux: policy capability always_check_network=0
Mar 6 01:39:11.770038 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 6 01:39:11.770049 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 6 01:39:11.770065 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 6 01:39:11.770085 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 6 01:39:11.770097 kernel: audit: type=1403 audit(1772761150.513:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 6 01:39:11.770114 systemd[1]: Successfully loaded SELinux policy in 56.379ms.
Mar 6 01:39:11.770252 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.605ms.
Mar 6 01:39:11.770271 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 6 01:39:11.770284 systemd[1]: Detected virtualization kvm.
Mar 6 01:39:11.770296 systemd[1]: Detected architecture x86-64.
Mar 6 01:39:11.770308 systemd[1]: Detected first boot.
Mar 6 01:39:11.770323 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 01:39:11.770335 zram_generator::config[1069]: No configuration found.
Mar 6 01:39:11.770349 systemd[1]: Populated /etc with preset unit settings.
Mar 6 01:39:11.770361 systemd[1]: Queued start job for default target multi-user.target.
Mar 6 01:39:11.770607 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 6 01:39:11.770630 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 6 01:39:11.770643 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 6 01:39:11.770655 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 6 01:39:11.770672 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 6 01:39:11.770685 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 6 01:39:11.770697 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 6 01:39:11.770709 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 6 01:39:11.770721 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 6 01:39:11.770734 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 01:39:11.770746 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 01:39:11.770758 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 6 01:39:11.770770 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 6 01:39:11.770841 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 6 01:39:11.770857 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 01:39:11.770869 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 6 01:39:11.770880 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 01:39:11.770892 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 6 01:39:11.770904 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 01:39:11.770915 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 01:39:11.770927 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 01:39:11.770939 systemd[1]: Reached target swap.target - Swaps.
Mar 6 01:39:11.770955 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 6 01:39:11.770967 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 6 01:39:11.770979 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 6 01:39:11.770991 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 6 01:39:11.771002 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 01:39:11.771014 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 01:39:11.771026 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 01:39:11.771038 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 6 01:39:11.771054 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 6 01:39:11.771066 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 6 01:39:11.771077 systemd[1]: Mounting media.mount - External Media Directory...
Mar 6 01:39:11.771089 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 01:39:11.771101 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 6 01:39:11.771112 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 6 01:39:11.771123 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 6 01:39:11.771135 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 6 01:39:11.771147 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 01:39:11.771162 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 01:39:11.771174 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 6 01:39:11.771185 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 01:39:11.771197 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 6 01:39:11.771209 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 01:39:11.771220 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 6 01:39:11.771232 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 01:39:11.771244 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 6 01:39:11.771258 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 6 01:39:11.771275 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 6 01:39:11.771287 kernel: fuse: init (API version 7.39)
Mar 6 01:39:11.771299 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 01:39:11.771317 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 01:39:11.771329 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 6 01:39:11.771340 kernel: ACPI: bus type drm_connector registered
Mar 6 01:39:11.771351 kernel: loop: module loaded
Mar 6 01:39:11.771363 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 6 01:39:11.771401 systemd-journald[1170]: Collecting audit messages is disabled.
Mar 6 01:39:11.771424 systemd-journald[1170]: Journal started
Mar 6 01:39:11.771443 systemd-journald[1170]: Runtime Journal (/run/log/journal/e7c932bc5e4541fdaccc72e01a5884ca) is 6.0M, max 48.4M, 42.3M free.
Mar 6 01:39:11.779231 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 01:39:11.784926 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 01:39:11.790847 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 01:39:11.795307 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 6 01:39:11.798252 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 6 01:39:11.802030 systemd[1]: Mounted media.mount - External Media Directory.
Mar 6 01:39:11.804751 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 6 01:39:11.807937 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 6 01:39:11.811111 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 6 01:39:11.814318 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 6 01:39:11.818453 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 01:39:11.822320 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 6 01:39:11.822684 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 6 01:39:11.826317 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 01:39:11.826677 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 01:39:11.830277 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 6 01:39:11.830629 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 6 01:39:11.834016 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 01:39:11.834322 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 01:39:11.838029 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 6 01:39:11.838271 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 6 01:39:11.842099 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 01:39:11.842375 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 01:39:11.847075 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 01:39:11.856356 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 01:39:11.860370 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 6 01:39:11.880282 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 6 01:39:11.897069 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 6 01:39:11.902612 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 6 01:39:11.907136 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 6 01:39:11.910083 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 6 01:39:11.920016 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 6 01:39:11.924892 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 6 01:39:11.934019 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 6 01:39:11.937769 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 6 01:39:11.941123 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 01:39:11.951010 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 01:39:11.951538 systemd-journald[1170]: Time spent on flushing to /var/log/journal/e7c932bc5e4541fdaccc72e01a5884ca is 27.211ms for 931 entries.
Mar 6 01:39:11.951538 systemd-journald[1170]: System Journal (/var/log/journal/e7c932bc5e4541fdaccc72e01a5884ca) is 8.0M, max 195.6M, 187.6M free.
Mar 6 01:39:12.003118 systemd-journald[1170]: Received client request to flush runtime journal.
Mar 6 01:39:11.964873 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 01:39:11.970352 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 6 01:39:11.976305 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 6 01:39:11.982333 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 6 01:39:11.992028 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 6 01:39:12.006111 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 6 01:39:12.011731 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 6 01:39:12.013956 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Mar 6 01:39:12.013969 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Mar 6 01:39:12.021981 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 01:39:12.034193 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 6 01:39:12.034946 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 01:39:12.043285 udevadm[1216]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 6 01:39:12.073626 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 6 01:39:12.085083 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 01:39:12.124276 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Mar 6 01:39:12.124331 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Mar 6 01:39:12.134416 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 01:39:12.571897 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 6 01:39:12.590031 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 01:39:12.625709 systemd-udevd[1235]: Using default interface naming scheme 'v255'.
Mar 6 01:39:12.657901 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 01:39:12.674026 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 6 01:39:12.698038 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 6 01:39:12.712035 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Mar 6 01:39:12.758902 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1253)
Mar 6 01:39:12.783237 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 6 01:39:12.833914 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 6 01:39:12.834598 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 6 01:39:12.856043 kernel: ACPI: button: Power Button [PWRF]
Mar 6 01:39:12.856113 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 6 01:39:12.856386 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 6 01:39:12.856657 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 6 01:39:12.879146 systemd-networkd[1245]: lo: Link UP
Mar 6 01:39:12.879156 systemd-networkd[1245]: lo: Gained carrier
Mar 6 01:39:12.882585 systemd-networkd[1245]: Enumeration completed
Mar 6 01:39:12.883578 systemd-networkd[1245]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 01:39:12.883677 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 6 01:39:12.883944 systemd-networkd[1245]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 6 01:39:12.885206 systemd-networkd[1245]: eth0: Link UP
Mar 6 01:39:12.885350 systemd-networkd[1245]: eth0: Gained carrier
Mar 6 01:39:12.885404 systemd-networkd[1245]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 01:39:12.901255 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 6 01:39:12.913028 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:39:12.917873 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 6 01:39:12.921877 systemd-networkd[1245]: eth0: DHCPv4 address 10.0.0.102/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 6 01:39:12.988854 kernel: mousedev: PS/2 mouse device common for all mice
Mar 6 01:39:13.008303 kernel: kvm_amd: TSC scaling supported
Mar 6 01:39:13.008394 kernel: kvm_amd: Nested Virtualization enabled
Mar 6 01:39:13.008419 kernel: kvm_amd: Nested Paging enabled
Mar 6 01:39:13.010675 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 6 01:39:13.010706 kernel: kvm_amd: PMU virtualization is disabled
Mar 6 01:39:13.066851 kernel: EDAC MC: Ver: 3.0.0
Mar 6 01:39:13.101892 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 6 01:39:13.200107 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 6 01:39:13.205300 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:39:13.224649 lvm[1280]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 6 01:39:13.269700 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 6 01:39:13.274284 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 01:39:13.285985 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 6 01:39:13.294204 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 6 01:39:13.339766 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 6 01:39:13.344089 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 6 01:39:13.349029 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 6 01:39:13.349058 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 01:39:13.352570 systemd[1]: Reached target machines.target - Containers.
Mar 6 01:39:13.357324 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 6 01:39:13.373065 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 6 01:39:13.378966 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 6 01:39:13.382224 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 01:39:13.384041 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 6 01:39:13.389974 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 6 01:39:13.396100 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 6 01:39:13.397625 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 6 01:39:13.418902 kernel: loop0: detected capacity change from 0 to 140768
Mar 6 01:39:13.429176 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 6 01:39:13.436423 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 6 01:39:13.437669 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 6 01:39:13.454906 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 6 01:39:13.481851 kernel: loop1: detected capacity change from 0 to 142488
Mar 6 01:39:13.531893 kernel: loop2: detected capacity change from 0 to 228704
Mar 6 01:39:13.569064 kernel: loop3: detected capacity change from 0 to 140768
Mar 6 01:39:13.589863 kernel: loop4: detected capacity change from 0 to 142488
Mar 6 01:39:13.609860 kernel: loop5: detected capacity change from 0 to 228704
Mar 6 01:39:13.620217 (sd-merge)[1306]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 6 01:39:13.621078 (sd-merge)[1306]: Merged extensions into '/usr'.
Mar 6 01:39:13.626271 systemd[1]: Reloading requested from client PID 1293 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 6 01:39:13.626305 systemd[1]: Reloading...
Mar 6 01:39:13.700851 zram_generator::config[1331]: No configuration found.
Mar 6 01:39:13.786698 ldconfig[1290]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 6 01:39:13.900765 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 6 01:39:13.966821 systemd[1]: Reloading finished in 339 ms.
Mar 6 01:39:13.990095 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 6 01:39:14.007639 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 6 01:39:14.029102 systemd[1]: Starting ensure-sysext.service...
Mar 6 01:39:14.032648 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 01:39:14.038382 systemd[1]: Reloading requested from client PID 1378 ('systemctl') (unit ensure-sysext.service)...
Mar 6 01:39:14.038428 systemd[1]: Reloading...
Mar 6 01:39:14.066721 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 6 01:39:14.067218 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 6 01:39:14.068376 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 6 01:39:14.068717 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Mar 6 01:39:14.068914 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Mar 6 01:39:14.077062 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Mar 6 01:39:14.077081 systemd-tmpfiles[1379]: Skipping /boot
Mar 6 01:39:14.099377 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Mar 6 01:39:14.099577 systemd-tmpfiles[1379]: Skipping /boot
Mar 6 01:39:14.103021 zram_generator::config[1410]: No configuration found.
Mar 6 01:39:14.232656 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 6 01:39:14.305330 systemd[1]: Reloading finished in 266 ms.
Mar 6 01:39:14.330076 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 01:39:14.363009 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 6 01:39:14.367698 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 6 01:39:14.372630 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 6 01:39:14.379577 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 01:39:14.385956 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 6 01:39:14.397235 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 01:39:14.397502 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 01:39:14.401292 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 01:39:14.407100 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 6 01:39:14.422155 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 01:39:14.428939 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 01:39:14.432770 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 01:39:14.433111 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 01:39:14.435601 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 6 01:39:14.442260 augenrules[1481]: No rules
Mar 6 01:39:14.442497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 01:39:14.442725 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 01:39:14.447159 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 6 01:39:14.452065 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 6 01:39:14.452338 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 6 01:39:14.456632 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 01:39:14.456932 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 01:39:14.461397 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 6 01:39:14.465728 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 01:39:14.466098 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 01:39:14.474520 systemd[1]: Finished ensure-sysext.service.
Mar 6 01:39:14.483864 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 6 01:39:14.484030 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 6 01:39:14.489964 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 6 01:39:14.490172 systemd-resolved[1462]: Positive Trust Anchors:
Mar 6 01:39:14.490200 systemd-resolved[1462]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 01:39:14.490228 systemd-resolved[1462]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 01:39:14.495024 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 6 01:39:14.495768 systemd-resolved[1462]: Defaulting to hostname 'linux'.
Mar 6 01:39:14.497995 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 6 01:39:14.498754 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 6 01:39:14.502742 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 6 01:39:14.509198 systemd[1]: Reached target network.target - Network.
Mar 6 01:39:14.512034 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 6 01:39:14.517201 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 6 01:39:14.581366 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 6 01:39:15.577516 systemd-timesyncd[1500]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 6 01:39:15.577571 systemd-resolved[1462]: Clock change detected. Flushing caches.
Mar 6 01:39:15.577589 systemd-timesyncd[1500]: Initial clock synchronization to Fri 2026-03-06 01:39:15.577362 UTC.
Mar 6 01:39:15.581137 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 6 01:39:15.584482 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 6 01:39:15.588040 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 6 01:39:15.591622 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 6 01:39:15.596057 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 6 01:39:15.596143 systemd[1]: Reached target paths.target - Path Units.
Mar 6 01:39:15.600221 systemd[1]: Reached target time-set.target - System Time Set.
Mar 6 01:39:15.604813 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 6 01:39:15.608172 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 6 01:39:15.612026 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 01:39:15.616416 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 6 01:39:15.622352 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 6 01:39:15.627030 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 6 01:39:15.633839 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 6 01:39:15.637029 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 01:39:15.640097 systemd[1]: Reached target basic.target - Basic System.
Mar 6 01:39:15.643327 systemd[1]: System is tainted: cgroupsv1
Mar 6 01:39:15.643395 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 6 01:39:15.643425 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 6 01:39:15.645190 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 6 01:39:15.650344 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 6 01:39:15.654981 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 6 01:39:15.661914 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 6 01:39:15.666771 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 6 01:39:15.671201 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 6 01:39:15.673564 jq[1510]: false
Mar 6 01:39:15.675392 systemd-networkd[1245]: eth0: Gained IPv6LL
Mar 6 01:39:15.679585 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 6 01:39:15.686713 dbus-daemon[1509]: [system] SELinux support is enabled
Mar 6 01:39:15.687741 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 6 01:39:15.694311 extend-filesystems[1512]: Found loop3
Mar 6 01:39:15.696711 extend-filesystems[1512]: Found loop4
Mar 6 01:39:15.696711 extend-filesystems[1512]: Found loop5
Mar 6 01:39:15.696711 extend-filesystems[1512]: Found sr0
Mar 6 01:39:15.696711 extend-filesystems[1512]: Found vda
Mar 6 01:39:15.696711 extend-filesystems[1512]: Found vda1
Mar 6 01:39:15.696711 extend-filesystems[1512]: Found vda2
Mar 6 01:39:15.696711 extend-filesystems[1512]: Found vda3
Mar 6 01:39:15.696711 extend-filesystems[1512]: Found usr
Mar 6 01:39:15.696711 extend-filesystems[1512]: Found vda4
Mar 6 01:39:15.696711 extend-filesystems[1512]: Found vda6
Mar 6 01:39:15.696711 extend-filesystems[1512]: Found vda7
Mar 6 01:39:15.696711 extend-filesystems[1512]: Found vda9
Mar 6 01:39:15.696711 extend-filesystems[1512]: Checking size of /dev/vda9
Mar 6 01:39:15.756207 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1236)
Mar 6 01:39:15.756240 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 6 01:39:15.695771 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 6 01:39:15.756447 extend-filesystems[1512]: Resized partition /dev/vda9
Mar 6 01:39:15.706458 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 6 01:39:15.759456 extend-filesystems[1534]: resize2fs 1.47.1 (20-May-2024)
Mar 6 01:39:15.715819 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 6 01:39:15.721756 systemd[1]: Starting update-engine.service - Update Engine...
Mar 6 01:39:15.772783 update_engine[1533]: I20260306 01:39:15.765584 1533 main.cc:92] Flatcar Update Engine starting
Mar 6 01:39:15.772783 update_engine[1533]: I20260306 01:39:15.766888 1533 update_check_scheduler.cc:74] Next update check in 10m25s
Mar 6 01:39:15.743463 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 6 01:39:15.771019 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 6 01:39:15.780735 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 6 01:39:15.811555 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 6 01:39:15.811626 jq[1537]: true
Mar 6 01:39:15.794044 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 6 01:39:15.794540 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 6 01:39:15.795044 systemd[1]: motdgen.service: Deactivated successfully.
Mar 6 01:39:15.795577 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 6 01:39:15.804763 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 6 01:39:15.805493 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 6 01:39:15.812570 extend-filesystems[1534]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 6 01:39:15.812570 extend-filesystems[1534]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 6 01:39:15.812570 extend-filesystems[1534]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 6 01:39:15.821358 extend-filesystems[1512]: Resized filesystem in /dev/vda9
Mar 6 01:39:15.829445 jq[1545]: true
Mar 6 01:39:15.830981 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 6 01:39:15.831389 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
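The extend-filesystems entries above record an on-line ext4 grow of the root filesystem on /dev/vda9, from 553472 to 1864699 4 KiB blocks. A minimal sketch of the equivalent manual steps, with the device names taken from this log and the `growpart` tool (from cloud-guest-utils) as an assumption about the partition-growing step:

```shell
# Hedged sketch of what extend-filesystems.service effectively does.
# The partition must be grown before the filesystem that lives on it:
#   growpart /dev/vda 9     # extend partition 9's entry in the partition table
#   resize2fs /dev/vda9     # ext4 supports on-line grow while mounted rw
# Sanity-check the log's numbers: 1864699 blocks of 4 KiB each.
new_bytes=$(( 1864699 * 4096 ))
echo "resized filesystem: ${new_bytes} bytes"
```

Because ext4 can grow while mounted read-write, no reboot or unmount is needed; shrinking, by contrast, requires the filesystem to be offline.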
Mar 6 01:39:15.838755 (ntainerd)[1547]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 6 01:39:15.841658 systemd-logind[1525]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 6 01:39:15.841689 systemd-logind[1525]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 6 01:39:15.844517 systemd-logind[1525]: New seat seat0.
Mar 6 01:39:15.856536 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 6 01:39:15.864186 dbus-daemon[1509]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 6 01:39:15.867624 tar[1543]: linux-amd64/LICENSE
Mar 6 01:39:15.867966 tar[1543]: linux-amd64/helm
Mar 6 01:39:15.877103 systemd[1]: Started update-engine.service - Update Engine.
Mar 6 01:39:15.887220 systemd[1]: Reached target network-online.target - Network is Online.
Mar 6 01:39:15.901873 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 6 01:39:15.910625 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 01:39:15.913862 bash[1575]: Updated "/home/core/.ssh/authorized_keys"
Mar 6 01:39:15.917299 sshd_keygen[1539]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 6 01:39:15.925864 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 6 01:39:15.930219 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 6 01:39:15.930792 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 6 01:39:15.935382 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 6 01:39:15.935996 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 6 01:39:15.941439 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 6 01:39:15.945642 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 6 01:39:15.963317 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 6 01:39:15.967725 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 6 01:39:15.981384 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 6 01:39:15.996444 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 6 01:39:16.001726 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 6 01:39:16.021766 systemd[1]: issuegen.service: Deactivated successfully.
Mar 6 01:39:16.022157 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 6 01:39:16.026917 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 6 01:39:16.027411 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 6 01:39:16.034829 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 6 01:39:16.040083 locksmithd[1589]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 6 01:39:16.047781 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 6 01:39:16.073629 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 6 01:39:16.091838 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 6 01:39:16.098158 containerd[1547]: time="2026-03-06T01:39:16.097688902Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 6 01:39:16.098060 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 6 01:39:16.102529 systemd[1]: Reached target getty.target - Login Prompts.
Mar 6 01:39:16.122186 containerd[1547]: time="2026-03-06T01:39:16.122136597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 6 01:39:16.126371 containerd[1547]: time="2026-03-06T01:39:16.126237344Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 6 01:39:16.126371 containerd[1547]: time="2026-03-06T01:39:16.126325780Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 6 01:39:16.126371 containerd[1547]: time="2026-03-06T01:39:16.126348281Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 6 01:39:16.126583 containerd[1547]: time="2026-03-06T01:39:16.126525332Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 6 01:39:16.126583 containerd[1547]: time="2026-03-06T01:39:16.126569164Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 6 01:39:16.126685 containerd[1547]: time="2026-03-06T01:39:16.126650536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 6 01:39:16.126685 containerd[1547]: time="2026-03-06T01:39:16.126683678Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 6 01:39:16.127041 containerd[1547]: time="2026-03-06T01:39:16.126986093Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 6 01:39:16.127041 containerd[1547]: time="2026-03-06T01:39:16.127026568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 6 01:39:16.127041 containerd[1547]: time="2026-03-06T01:39:16.127040444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 6 01:39:16.127100 containerd[1547]: time="2026-03-06T01:39:16.127050844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 6 01:39:16.127204 containerd[1547]: time="2026-03-06T01:39:16.127151060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 6 01:39:16.127532 containerd[1547]: time="2026-03-06T01:39:16.127491867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 6 01:39:16.127731 containerd[1547]: time="2026-03-06T01:39:16.127649361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 6 01:39:16.127731 containerd[1547]: time="2026-03-06T01:39:16.127692231Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 6 01:39:16.127840 containerd[1547]: time="2026-03-06T01:39:16.127791036Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 6 01:39:16.127865 containerd[1547]: time="2026-03-06T01:39:16.127849856Z" level=info msg="metadata content store policy set" policy=shared
Mar 6 01:39:16.133242 containerd[1547]: time="2026-03-06T01:39:16.133162245Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 6 01:39:16.133242 containerd[1547]: time="2026-03-06T01:39:16.133227317Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 6 01:39:16.133242 containerd[1547]: time="2026-03-06T01:39:16.133242916Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 6 01:39:16.133957 containerd[1547]: time="2026-03-06T01:39:16.133300133Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 6 01:39:16.133957 containerd[1547]: time="2026-03-06T01:39:16.133317586Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 6 01:39:16.133957 containerd[1547]: time="2026-03-06T01:39:16.133612807Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 6 01:39:16.134113 containerd[1547]: time="2026-03-06T01:39:16.133964875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 6 01:39:16.134113 containerd[1547]: time="2026-03-06T01:39:16.134089697Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 6 01:39:16.134113 containerd[1547]: time="2026-03-06T01:39:16.134104175Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 6 01:39:16.134162 containerd[1547]: time="2026-03-06T01:39:16.134115545Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 6 01:39:16.134162 containerd[1547]: time="2026-03-06T01:39:16.134128140Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 6 01:39:16.134162 containerd[1547]: time="2026-03-06T01:39:16.134140422Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 6 01:39:16.134162 containerd[1547]: time="2026-03-06T01:39:16.134151973Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 6 01:39:16.134232 containerd[1547]: time="2026-03-06T01:39:16.134164577Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 6 01:39:16.134232 containerd[1547]: time="2026-03-06T01:39:16.134178383Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 6 01:39:16.134232 containerd[1547]: time="2026-03-06T01:39:16.134191267Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 6 01:39:16.134232 containerd[1547]: time="2026-03-06T01:39:16.134203089Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 6 01:39:16.134232 containerd[1547]: time="2026-03-06T01:39:16.134213098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 6 01:39:16.134232 containerd[1547]: time="2026-03-06T01:39:16.134229859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 6 01:39:16.134377 containerd[1547]: time="2026-03-06T01:39:16.134247683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 6 01:39:16.134377 containerd[1547]: time="2026-03-06T01:39:16.134338161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 6 01:39:16.134377 containerd[1547]: time="2026-03-06T01:39:16.134351657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 6 01:39:16.134377 containerd[1547]: time="2026-03-06T01:39:16.134362446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 6 01:39:16.134377 containerd[1547]: time="2026-03-06T01:39:16.134374920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 6 01:39:16.134471 containerd[1547]: time="2026-03-06T01:39:16.134386321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 6 01:39:16.134471 containerd[1547]: time="2026-03-06T01:39:16.134399576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 6 01:39:16.134471 containerd[1547]: time="2026-03-06T01:39:16.134410647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 6 01:39:16.134471 containerd[1547]: time="2026-03-06T01:39:16.134424562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 6 01:39:16.134471 containerd[1547]: time="2026-03-06T01:39:16.134435634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 6 01:39:16.134471 containerd[1547]: time="2026-03-06T01:39:16.134446905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 6 01:39:16.134471 containerd[1547]: time="2026-03-06T01:39:16.134458736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 6 01:39:16.134471 containerd[1547]: time="2026-03-06T01:39:16.134473665Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 6 01:39:16.134642 containerd[1547]: time="2026-03-06T01:39:16.134492790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 6 01:39:16.134642 containerd[1547]: time="2026-03-06T01:39:16.134504362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 6 01:39:16.134642 containerd[1547]: time="2026-03-06T01:39:16.134515022Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 6 01:39:16.134642 containerd[1547]: time="2026-03-06T01:39:16.134555788Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 6 01:39:16.134642 containerd[1547]: time="2026-03-06T01:39:16.134569203Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 6 01:39:16.134642 containerd[1547]: time="2026-03-06T01:39:16.134579051Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 6 01:39:16.134642 containerd[1547]: time="2026-03-06T01:39:16.134590853Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 6 01:39:16.134642 containerd[1547]: time="2026-03-06T01:39:16.134600080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 6 01:39:16.134642 containerd[1547]: time="2026-03-06T01:39:16.134610801Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 6 01:39:16.134642 containerd[1547]: time="2026-03-06T01:39:16.134624226Z" level=info msg="NRI interface is disabled by configuration."
Mar 6 01:39:16.134642 containerd[1547]: time="2026-03-06T01:39:16.134633603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 6 01:39:16.134997 containerd[1547]: time="2026-03-06T01:39:16.134832244Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 6 01:39:16.134997 containerd[1547]: time="2026-03-06T01:39:16.134880173Z" level=info msg="Connect containerd service"
Mar 6 01:39:16.134997 containerd[1547]: time="2026-03-06T01:39:16.134915429Z" level=info msg="using legacy CRI server"
Mar 6 01:39:16.134997 containerd[1547]: time="2026-03-06T01:39:16.134922593Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 6 01:39:16.135211 containerd[1547]: time="2026-03-06T01:39:16.135056242Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 6 01:39:16.135799 containerd[1547]: time="2026-03-06T01:39:16.135758704Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 6 01:39:16.136164 containerd[1547]: time="2026-03-06T01:39:16.136107746Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 6 01:39:16.136225 containerd[1547]: time="2026-03-06T01:39:16.136192835Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 6 01:39:16.136314 containerd[1547]: time="2026-03-06T01:39:16.136247557Z" level=info msg="Start subscribing containerd event"
Mar 6 01:39:16.136342 containerd[1547]: time="2026-03-06T01:39:16.136324290Z" level=info msg="Start recovering state"
Mar 6 01:39:16.136412 containerd[1547]: time="2026-03-06T01:39:16.136381657Z" level=info msg="Start event monitor"
Mar 6 01:39:16.136434 containerd[1547]: time="2026-03-06T01:39:16.136415761Z" level=info msg="Start snapshots syncer"
Mar 6 01:39:16.136434 containerd[1547]: time="2026-03-06T01:39:16.136425369Z" level=info msg="Start cni network conf syncer for default"
Mar 6 01:39:16.136434 containerd[1547]: time="2026-03-06T01:39:16.136432843Z" level=info msg="Start streaming server"
Mar 6 01:39:16.136836 containerd[1547]: time="2026-03-06T01:39:16.136514676Z" level=info msg="containerd successfully booted in 0.041362s"
Mar 6 01:39:16.136664 systemd[1]: Started containerd.service - containerd container runtime.
Mar 6 01:39:16.362290 tar[1543]: linux-amd64/README.md
Mar 6 01:39:16.379348 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 6 01:39:16.764099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 01:39:16.768177 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 6 01:39:16.770338 (kubelet)[1646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:39:16.773217 systemd[1]: Startup finished in 9.504s (kernel) + 5.319s (userspace) = 14.823s. Mar 6 01:39:17.241086 kubelet[1646]: E0306 01:39:17.240890 1646 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:39:17.245163 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:39:17.245590 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:39:19.424322 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 6 01:39:19.437686 systemd[1]: Started sshd@0-10.0.0.102:22-10.0.0.1:39608.service - OpenSSH per-connection server daemon (10.0.0.1:39608). Mar 6 01:39:19.503766 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 39608 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:39:19.506709 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:39:19.519973 systemd-logind[1525]: New session 1 of user core. Mar 6 01:39:19.521117 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 6 01:39:19.529878 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 6 01:39:19.545916 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 6 01:39:19.563633 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 6 01:39:19.567738 (systemd)[1666]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 6 01:39:19.672745 systemd[1666]: Queued start job for default target default.target. Mar 6 01:39:19.673305 systemd[1666]: Created slice app.slice - User Application Slice. Mar 6 01:39:19.673327 systemd[1666]: Reached target paths.target - Paths. Mar 6 01:39:19.673340 systemd[1666]: Reached target timers.target - Timers. Mar 6 01:39:19.684482 systemd[1666]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 6 01:39:19.692622 systemd[1666]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 6 01:39:19.692711 systemd[1666]: Reached target sockets.target - Sockets. Mar 6 01:39:19.692725 systemd[1666]: Reached target basic.target - Basic System. Mar 6 01:39:19.692773 systemd[1666]: Reached target default.target - Main User Target. Mar 6 01:39:19.692814 systemd[1666]: Startup finished in 116ms. Mar 6 01:39:19.693460 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 6 01:39:19.696041 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 6 01:39:19.760830 systemd[1]: Started sshd@1-10.0.0.102:22-10.0.0.1:39620.service - OpenSSH per-connection server daemon (10.0.0.1:39620). Mar 6 01:39:19.794456 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 39620 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:39:19.796729 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:39:19.803585 systemd-logind[1525]: New session 2 of user core. Mar 6 01:39:19.819628 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 6 01:39:19.878052 sshd[1678]: pam_unix(sshd:session): session closed for user core Mar 6 01:39:19.885510 systemd[1]: Started sshd@2-10.0.0.102:22-10.0.0.1:39628.service - OpenSSH per-connection server daemon (10.0.0.1:39628). 
Mar 6 01:39:19.886059 systemd[1]: sshd@1-10.0.0.102:22-10.0.0.1:39620.service: Deactivated successfully. Mar 6 01:39:19.888620 systemd-logind[1525]: Session 2 logged out. Waiting for processes to exit. Mar 6 01:39:19.889758 systemd[1]: session-2.scope: Deactivated successfully. Mar 6 01:39:19.892733 systemd-logind[1525]: Removed session 2. Mar 6 01:39:19.915460 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 39628 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:39:19.917053 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:39:19.922857 systemd-logind[1525]: New session 3 of user core. Mar 6 01:39:19.932694 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 6 01:39:19.984710 sshd[1683]: pam_unix(sshd:session): session closed for user core Mar 6 01:39:19.994519 systemd[1]: Started sshd@3-10.0.0.102:22-10.0.0.1:39634.service - OpenSSH per-connection server daemon (10.0.0.1:39634). Mar 6 01:39:19.995074 systemd[1]: sshd@2-10.0.0.102:22-10.0.0.1:39628.service: Deactivated successfully. Mar 6 01:39:19.998212 systemd-logind[1525]: Session 3 logged out. Waiting for processes to exit. Mar 6 01:39:19.999371 systemd[1]: session-3.scope: Deactivated successfully. Mar 6 01:39:20.000424 systemd-logind[1525]: Removed session 3. Mar 6 01:39:20.033106 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 39634 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:39:20.035392 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:39:20.041089 systemd-logind[1525]: New session 4 of user core. Mar 6 01:39:20.050577 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 6 01:39:20.107823 sshd[1691]: pam_unix(sshd:session): session closed for user core Mar 6 01:39:20.121615 systemd[1]: Started sshd@4-10.0.0.102:22-10.0.0.1:39642.service - OpenSSH per-connection server daemon (10.0.0.1:39642). 
Mar 6 01:39:20.122408 systemd[1]: sshd@3-10.0.0.102:22-10.0.0.1:39634.service: Deactivated successfully. Mar 6 01:39:20.125678 systemd-logind[1525]: Session 4 logged out. Waiting for processes to exit. Mar 6 01:39:20.126686 systemd[1]: session-4.scope: Deactivated successfully. Mar 6 01:39:20.128447 systemd-logind[1525]: Removed session 4. Mar 6 01:39:20.153764 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 39642 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:39:20.155793 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:39:20.161065 systemd-logind[1525]: New session 5 of user core. Mar 6 01:39:20.170584 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 6 01:39:20.236578 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 6 01:39:20.237027 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:39:20.529731 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 6 01:39:20.530134 (dockerd)[1724]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 6 01:39:20.821096 dockerd[1724]: time="2026-03-06T01:39:20.820754704Z" level=info msg="Starting up" Mar 6 01:39:21.128811 dockerd[1724]: time="2026-03-06T01:39:21.128557975Z" level=info msg="Loading containers: start." Mar 6 01:39:21.282303 kernel: Initializing XFRM netlink socket Mar 6 01:39:21.394394 systemd-networkd[1245]: docker0: Link UP Mar 6 01:39:21.420776 dockerd[1724]: time="2026-03-06T01:39:21.420704345Z" level=info msg="Loading containers: done." Mar 6 01:39:21.441233 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1182032273-merged.mount: Deactivated successfully. 
Mar 6 01:39:21.443037 dockerd[1724]: time="2026-03-06T01:39:21.442912659Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 6 01:39:21.443147 dockerd[1724]: time="2026-03-06T01:39:21.443114797Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 6 01:39:21.443365 dockerd[1724]: time="2026-03-06T01:39:21.443323576Z" level=info msg="Daemon has completed initialization" Mar 6 01:39:21.498326 dockerd[1724]: time="2026-03-06T01:39:21.497646911Z" level=info msg="API listen on /run/docker.sock" Mar 6 01:39:21.497871 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 6 01:39:22.013009 containerd[1547]: time="2026-03-06T01:39:22.012907875Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 6 01:39:22.564612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1122593055.mount: Deactivated successfully. 
Mar 6 01:39:23.530504 containerd[1547]: time="2026-03-06T01:39:23.530406672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:23.531151 containerd[1547]: time="2026-03-06T01:39:23.531100012Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 6 01:39:23.532498 containerd[1547]: time="2026-03-06T01:39:23.532443611Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:23.536082 containerd[1547]: time="2026-03-06T01:39:23.535927731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:23.537011 containerd[1547]: time="2026-03-06T01:39:23.536907949Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 1.523896932s" Mar 6 01:39:23.537011 containerd[1547]: time="2026-03-06T01:39:23.536975916Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 6 01:39:23.537748 containerd[1547]: time="2026-03-06T01:39:23.537688091Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 6 01:39:24.699718 containerd[1547]: time="2026-03-06T01:39:24.699591324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:24.701136 containerd[1547]: time="2026-03-06T01:39:24.700564594Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 6 01:39:24.702449 containerd[1547]: time="2026-03-06T01:39:24.702330675Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:24.705904 containerd[1547]: time="2026-03-06T01:39:24.705798052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:24.707073 containerd[1547]: time="2026-03-06T01:39:24.707003784Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.169280246s" Mar 6 01:39:24.707073 containerd[1547]: time="2026-03-06T01:39:24.707056131Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 6 01:39:24.707883 containerd[1547]: time="2026-03-06T01:39:24.707826294Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 6 01:39:25.652702 containerd[1547]: time="2026-03-06T01:39:25.652548539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:25.653979 containerd[1547]: time="2026-03-06T01:39:25.653891265Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 6 01:39:25.655439 containerd[1547]: time="2026-03-06T01:39:25.655335272Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:25.660104 containerd[1547]: time="2026-03-06T01:39:25.660003711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:25.664326 containerd[1547]: time="2026-03-06T01:39:25.662019610Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 954.123414ms" Mar 6 01:39:25.664326 containerd[1547]: time="2026-03-06T01:39:25.662073690Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 6 01:39:25.664560 containerd[1547]: time="2026-03-06T01:39:25.664462998Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 6 01:39:26.679913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2214882080.mount: Deactivated successfully. 
Mar 6 01:39:27.214023 containerd[1547]: time="2026-03-06T01:39:27.213881418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:27.215902 containerd[1547]: time="2026-03-06T01:39:27.215808166Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 6 01:39:27.217788 containerd[1547]: time="2026-03-06T01:39:27.217701640Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:27.221894 containerd[1547]: time="2026-03-06T01:39:27.221787192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:27.222656 containerd[1547]: time="2026-03-06T01:39:27.222578958Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.558059694s" Mar 6 01:39:27.222656 containerd[1547]: time="2026-03-06T01:39:27.222631516Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 6 01:39:27.223575 containerd[1547]: time="2026-03-06T01:39:27.223495652Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 6 01:39:27.495656 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 6 01:39:27.516476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 6 01:39:27.731753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3069799758.mount: Deactivated successfully. Mar 6 01:39:27.739713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:39:27.743143 (kubelet)[1961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:39:27.806483 kubelet[1961]: E0306 01:39:27.806192 1961 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:39:27.812550 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:39:27.812920 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:39:29.187809 containerd[1547]: time="2026-03-06T01:39:29.187658214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:29.188840 containerd[1547]: time="2026-03-06T01:39:29.188531598Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 6 01:39:29.190109 containerd[1547]: time="2026-03-06T01:39:29.190043654Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:29.194582 containerd[1547]: time="2026-03-06T01:39:29.194486533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:29.195598 containerd[1547]: time="2026-03-06T01:39:29.195499960Z" 
level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.971938926s" Mar 6 01:39:29.195598 containerd[1547]: time="2026-03-06T01:39:29.195547269Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 6 01:39:29.196484 containerd[1547]: time="2026-03-06T01:39:29.196410739Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 6 01:39:29.911859 kernel: hrtimer: interrupt took 3391643 ns Mar 6 01:39:30.726517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1551809984.mount: Deactivated successfully. Mar 6 01:39:30.738373 containerd[1547]: time="2026-03-06T01:39:30.738128244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:30.741138 containerd[1547]: time="2026-03-06T01:39:30.740377760Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 6 01:39:30.742067 containerd[1547]: time="2026-03-06T01:39:30.741992351Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:30.759426 containerd[1547]: time="2026-03-06T01:39:30.759176314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:30.760083 containerd[1547]: time="2026-03-06T01:39:30.759999932Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.563520646s" Mar 6 01:39:30.760083 containerd[1547]: time="2026-03-06T01:39:30.760040637Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 6 01:39:30.761172 containerd[1547]: time="2026-03-06T01:39:30.761126483Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 6 01:39:31.523054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4074273746.mount: Deactivated successfully. Mar 6 01:39:34.925530 containerd[1547]: time="2026-03-06T01:39:34.925227784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:34.926434 containerd[1547]: time="2026-03-06T01:39:34.926381947Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 6 01:39:34.928083 containerd[1547]: time="2026-03-06T01:39:34.928000662Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:34.932015 containerd[1547]: time="2026-03-06T01:39:34.931931680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:39:34.933129 containerd[1547]: time="2026-03-06T01:39:34.933086254Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag 
\"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 4.171898105s" Mar 6 01:39:34.933129 containerd[1547]: time="2026-03-06T01:39:34.933127701Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 6 01:39:38.064083 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 6 01:39:38.075526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:39:38.293821 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:39:38.300050 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:39:38.641567 kubelet[2124]: E0306 01:39:38.641439 2124 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:39:38.650618 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:39:38.651139 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:39:40.100379 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:39:40.113544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:39:40.160806 systemd[1]: Reloading requested from client PID 2142 ('systemctl') (unit session-5.scope)... Mar 6 01:39:40.160925 systemd[1]: Reloading... Mar 6 01:39:40.279572 zram_generator::config[2181]: No configuration found. 
Mar 6 01:39:40.464109 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 6 01:39:40.572886 systemd[1]: Reloading finished in 411 ms. Mar 6 01:39:40.625818 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 6 01:39:40.626063 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 6 01:39:40.626757 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:39:40.629475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:39:40.893420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:39:40.911529 (kubelet)[2239]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 01:39:41.057700 kubelet[2239]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 01:39:41.057700 kubelet[2239]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 6 01:39:41.057700 kubelet[2239]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 6 01:39:41.057700 kubelet[2239]: I0306 01:39:41.057566 2239 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 6 01:39:41.666110 kubelet[2239]: I0306 01:39:41.666005 2239 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 6 01:39:41.666110 kubelet[2239]: I0306 01:39:41.666072 2239 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 01:39:41.666435 kubelet[2239]: I0306 01:39:41.666389 2239 server.go:956] "Client rotation is on, will bootstrap in background" Mar 6 01:39:41.698443 kubelet[2239]: E0306 01:39:41.698351 2239 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 6 01:39:41.700527 kubelet[2239]: I0306 01:39:41.700428 2239 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 01:39:41.714650 kubelet[2239]: E0306 01:39:41.714564 2239 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 6 01:39:41.714650 kubelet[2239]: I0306 01:39:41.714623 2239 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 6 01:39:41.723326 kubelet[2239]: I0306 01:39:41.723125 2239 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 6 01:39:41.725006 kubelet[2239]: I0306 01:39:41.724850 2239 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 01:39:41.725232 kubelet[2239]: I0306 01:39:41.724930 2239 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 6 01:39:41.725232 kubelet[2239]: I0306 01:39:41.725215 2239 topology_manager.go:138] "Creating topology manager with none policy" Mar 6 01:39:41.725232 
kubelet[2239]: I0306 01:39:41.725227 2239 container_manager_linux.go:303] "Creating device plugin manager"
Mar 6 01:39:41.725512 kubelet[2239]: I0306 01:39:41.725476 2239 state_mem.go:36] "Initialized new in-memory state store"
Mar 6 01:39:41.734240 kubelet[2239]: I0306 01:39:41.734153 2239 kubelet.go:480] "Attempting to sync node with API server"
Mar 6 01:39:41.734384 kubelet[2239]: I0306 01:39:41.734346 2239 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 6 01:39:41.734384 kubelet[2239]: I0306 01:39:41.734382 2239 kubelet.go:386] "Adding apiserver pod source"
Mar 6 01:39:41.736820 kubelet[2239]: I0306 01:39:41.736342 2239 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 6 01:39:41.740468 kubelet[2239]: E0306 01:39:41.740434 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 6 01:39:41.740597 kubelet[2239]: E0306 01:39:41.740483 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 6 01:39:41.742058 kubelet[2239]: I0306 01:39:41.741755 2239 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 6 01:39:41.742398 kubelet[2239]: I0306 01:39:41.742330 2239 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 6 01:39:42.312893 kubelet[2239]: W0306 01:39:42.312587 2239 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 6 01:39:42.324362 kubelet[2239]: I0306 01:39:42.324202 2239 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 6 01:39:42.324457 kubelet[2239]: I0306 01:39:42.324404 2239 server.go:1289] "Started kubelet"
Mar 6 01:39:42.324652 kubelet[2239]: I0306 01:39:42.324595 2239 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 6 01:39:42.327318 kubelet[2239]: I0306 01:39:42.326492 2239 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 6 01:39:42.329393 kubelet[2239]: I0306 01:39:42.328599 2239 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 6 01:39:42.332435 kubelet[2239]: I0306 01:39:42.330327 2239 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 6 01:39:42.332435 kubelet[2239]: I0306 01:39:42.330519 2239 server.go:317] "Adding debug handlers to kubelet server"
Mar 6 01:39:42.332435 kubelet[2239]: E0306 01:39:42.329069 2239 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.102:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.102:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a1ce7200aba56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-06 01:39:42.324308566 +0000 UTC m=+1.379523403,LastTimestamp:2026-03-06 01:39:42.324308566 +0000 UTC m=+1.379523403,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 6 01:39:42.336542 kubelet[2239]: I0306 01:39:42.336426 2239 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 6 01:39:42.341034 kubelet[2239]: E0306 01:39:42.340891 2239 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 6 01:39:42.341574 kubelet[2239]: E0306 01:39:42.341459 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 6 01:39:42.341673 kubelet[2239]: I0306 01:39:42.341585 2239 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 6 01:39:42.343832 kubelet[2239]: I0306 01:39:42.342545 2239 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 6 01:39:42.343832 kubelet[2239]: I0306 01:39:42.342613 2239 reconciler.go:26] "Reconciler: start to sync state"
Mar 6 01:39:42.353584 kubelet[2239]: E0306 01:39:42.350934 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="200ms"
Mar 6 01:39:42.355236 kubelet[2239]: E0306 01:39:42.355042 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 6 01:39:42.356295 kubelet[2239]: I0306 01:39:42.356172 2239 factory.go:223] Registration of the systemd container factory successfully
Mar 6 01:39:42.356617 kubelet[2239]: I0306 01:39:42.356505 2239 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 6 01:39:42.360706 kubelet[2239]: I0306 01:39:42.360563 2239 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 6 01:39:42.366467 kubelet[2239]: I0306 01:39:42.364775 2239 factory.go:223] Registration of the containerd container factory successfully
Mar 6 01:39:42.416595 kubelet[2239]: I0306 01:39:42.416562 2239 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 6 01:39:42.417032 kubelet[2239]: I0306 01:39:42.416814 2239 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 6 01:39:42.417032 kubelet[2239]: I0306 01:39:42.416910 2239 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 6 01:39:42.417195 kubelet[2239]: I0306 01:39:42.417182 2239 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 6 01:39:42.418156 kubelet[2239]: E0306 01:39:42.417986 2239 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 6 01:39:42.418622 kubelet[2239]: E0306 01:39:42.418160 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 6 01:39:42.431055 kubelet[2239]: I0306 01:39:42.430946 2239 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 6 01:39:42.431055 kubelet[2239]: I0306 01:39:42.431010 2239 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 6 01:39:42.431055 kubelet[2239]: I0306 01:39:42.431032 2239 state_mem.go:36] "Initialized new in-memory state store"
Mar 6 01:39:42.442913 kubelet[2239]: E0306 01:39:42.442590 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 6 01:39:42.503329 kubelet[2239]: I0306 01:39:42.503186 2239 policy_none.go:49] "None policy: Start"
Mar 6 01:39:42.503518 kubelet[2239]: I0306 01:39:42.503372 2239 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 6 01:39:42.503518 kubelet[2239]: I0306 01:39:42.503402 2239 state_mem.go:35] "Initializing new in-memory state store"
Mar 6 01:39:42.515504 kubelet[2239]: E0306 01:39:42.515453 2239 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 6 01:39:42.515914 kubelet[2239]: I0306 01:39:42.515863 2239 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 6 01:39:42.516014 kubelet[2239]: I0306 01:39:42.515920 2239 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 6 01:39:42.517496 kubelet[2239]: I0306 01:39:42.517452 2239 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 6 01:39:42.519075 kubelet[2239]: E0306 01:39:42.519005 2239 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 6 01:39:42.519192 kubelet[2239]: E0306 01:39:42.519163 2239 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 6 01:39:42.529949 kubelet[2239]: E0306 01:39:42.529896 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 6 01:39:42.535355 kubelet[2239]: E0306 01:39:42.535200 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 6 01:39:42.540538 kubelet[2239]: E0306 01:39:42.540491 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 6 01:39:42.553774 kubelet[2239]: E0306 01:39:42.553693 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="400ms"
Mar 6 01:39:42.618896 kubelet[2239]: I0306 01:39:42.618652 2239 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 6 01:39:42.619412 kubelet[2239]: E0306 01:39:42.619320 2239 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost"
Mar 6 01:39:42.645030 kubelet[2239]: I0306 01:39:42.644804 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53610c1b410b2aff2c50aec13e64bb9b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"53610c1b410b2aff2c50aec13e64bb9b\") " pod="kube-system/kube-apiserver-localhost"
Mar 6 01:39:42.645030 kubelet[2239]: I0306 01:39:42.644897 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53610c1b410b2aff2c50aec13e64bb9b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"53610c1b410b2aff2c50aec13e64bb9b\") " pod="kube-system/kube-apiserver-localhost"
Mar 6 01:39:42.645030 kubelet[2239]: I0306 01:39:42.644919 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 6 01:39:42.645030 kubelet[2239]: I0306 01:39:42.644934 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 6 01:39:42.645748 kubelet[2239]: I0306 01:39:42.644951 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 6 01:39:42.645748 kubelet[2239]: I0306 01:39:42.645113 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost"
Mar 6 01:39:42.645748 kubelet[2239]: I0306 01:39:42.645370 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53610c1b410b2aff2c50aec13e64bb9b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"53610c1b410b2aff2c50aec13e64bb9b\") " pod="kube-system/kube-apiserver-localhost"
Mar 6 01:39:42.651099 kubelet[2239]: I0306 01:39:42.645677 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 6 01:39:42.651099 kubelet[2239]: I0306 01:39:42.651028 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 6 01:39:42.777209 kubelet[2239]: E0306 01:39:42.777092 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 6 01:39:42.832351 kubelet[2239]: E0306 01:39:42.832132 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:39:42.856417 kubelet[2239]: E0306 01:39:42.855532 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:39:42.856417 kubelet[2239]: E0306 01:39:42.855581 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:39:42.857041 containerd[1547]: time="2026-03-06T01:39:42.856853379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:53610c1b410b2aff2c50aec13e64bb9b,Namespace:kube-system,Attempt:0,}"
Mar 6 01:39:42.858606 containerd[1547]: time="2026-03-06T01:39:42.857066390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}"
Mar 6 01:39:42.858689 kubelet[2239]: I0306 01:39:42.858145 2239 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 6 01:39:42.862553 containerd[1547]: time="2026-03-06T01:39:42.861804395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}"
Mar 6 01:39:42.863147 kubelet[2239]: E0306 01:39:42.862885 2239 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost"
Mar 6 01:39:42.956809 kubelet[2239]: E0306 01:39:42.956158 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="800ms"
Mar 6 01:39:43.287867 kubelet[2239]: I0306 01:39:43.287221 2239 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 6 01:39:43.287867 kubelet[2239]: E0306 01:39:43.287545 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 6 01:39:43.295897 kubelet[2239]: E0306 01:39:43.290000 2239 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost"
Mar 6 01:39:43.479077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3011754507.mount: Deactivated successfully.
Mar 6 01:39:43.485222 containerd[1547]: time="2026-03-06T01:39:43.485115079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 01:39:43.488587 containerd[1547]: time="2026-03-06T01:39:43.488460297Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 6 01:39:43.489587 containerd[1547]: time="2026-03-06T01:39:43.489536335Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 01:39:43.491089 containerd[1547]: time="2026-03-06T01:39:43.490899493Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 01:39:43.492122 containerd[1547]: time="2026-03-06T01:39:43.492066142Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 6 01:39:43.493350 containerd[1547]: time="2026-03-06T01:39:43.493314040Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 01:39:43.494242 containerd[1547]: time="2026-03-06T01:39:43.494130456Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 6 01:39:43.497507 containerd[1547]: time="2026-03-06T01:39:43.497385392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 01:39:43.501393 containerd[1547]: time="2026-03-06T01:39:43.501227810Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 644.135805ms"
Mar 6 01:39:43.507551 containerd[1547]: time="2026-03-06T01:39:43.507505798Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 644.29692ms"
Mar 6 01:39:43.511425 containerd[1547]: time="2026-03-06T01:39:43.511333796Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 654.201353ms"
Mar 6 01:39:43.867758 kubelet[2239]: E0306 01:39:43.770727 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="1.6s"
Mar 6 01:39:43.869034 kubelet[2239]: E0306 01:39:43.868992 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 6 01:39:43.869424 kubelet[2239]: E0306 01:39:43.869234 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 6 01:39:43.870410 kubelet[2239]: E0306 01:39:43.869759 2239 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 6 01:39:44.172453 kubelet[2239]: I0306 01:39:44.170746 2239 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 6 01:39:44.172453 kubelet[2239]: E0306 01:39:44.171356 2239 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost"
Mar 6 01:39:44.235374 containerd[1547]: time="2026-03-06T01:39:44.234842650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 6 01:39:44.235374 containerd[1547]: time="2026-03-06T01:39:44.235038366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 6 01:39:44.235374 containerd[1547]: time="2026-03-06T01:39:44.235049858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:39:44.236172 containerd[1547]: time="2026-03-06T01:39:44.236133701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:39:44.375522 containerd[1547]: time="2026-03-06T01:39:44.375129482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 6 01:39:44.375522 containerd[1547]: time="2026-03-06T01:39:44.375436476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 6 01:39:44.375522 containerd[1547]: time="2026-03-06T01:39:44.375457155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:39:44.377683 containerd[1547]: time="2026-03-06T01:39:44.375654964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:39:44.380796 containerd[1547]: time="2026-03-06T01:39:44.380236879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 6 01:39:44.380796 containerd[1547]: time="2026-03-06T01:39:44.380344971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 6 01:39:44.380796 containerd[1547]: time="2026-03-06T01:39:44.380361442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:39:44.380796 containerd[1547]: time="2026-03-06T01:39:44.380550084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:39:44.651755 systemd[1]: run-containerd-runc-k8s.io-3497614455ff4cfeb65151eaa6e07d729c2f9b371325ee5e6ccd90fb03aa838e-runc.dImHWF.mount: Deactivated successfully.
Mar 6 01:39:45.010040 kubelet[2239]: E0306 01:39:45.008733 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 6 01:39:45.027140 containerd[1547]: time="2026-03-06T01:39:45.027081715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"3497614455ff4cfeb65151eaa6e07d729c2f9b371325ee5e6ccd90fb03aa838e\""
Mar 6 01:39:45.027783 containerd[1547]: time="2026-03-06T01:39:45.027749583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:53610c1b410b2aff2c50aec13e64bb9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"316ed7863f68136ce7f9a79a1172a9ce7f6c0378607f540df8ca4b76fbc35a31\""
Mar 6 01:39:45.033067 kubelet[2239]: E0306 01:39:45.032330 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:39:45.033874 kubelet[2239]: E0306 01:39:45.033847 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:39:45.049687 containerd[1547]: time="2026-03-06T01:39:45.049637575Z" level=info msg="CreateContainer within sandbox \"3497614455ff4cfeb65151eaa6e07d729c2f9b371325ee5e6ccd90fb03aa838e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 6 01:39:45.054667 containerd[1547]: time="2026-03-06T01:39:45.054628766Z" level=info msg="CreateContainer within sandbox \"316ed7863f68136ce7f9a79a1172a9ce7f6c0378607f540df8ca4b76fbc35a31\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 6 01:39:45.063661 containerd[1547]: time="2026-03-06T01:39:45.063570372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"410a765c5c2f0781fd6031dcae4cf40e3f2a9774b0bab52ca2a3c89030a97d07\""
Mar 6 01:39:45.065078 kubelet[2239]: E0306 01:39:45.065009 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:39:45.074092 containerd[1547]: time="2026-03-06T01:39:45.074037436Z" level=info msg="CreateContainer within sandbox \"410a765c5c2f0781fd6031dcae4cf40e3f2a9774b0bab52ca2a3c89030a97d07\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 6 01:39:45.088109 containerd[1547]: time="2026-03-06T01:39:45.087868529Z" level=info msg="CreateContainer within sandbox \"3497614455ff4cfeb65151eaa6e07d729c2f9b371325ee5e6ccd90fb03aa838e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6f8eaf4d37a57f052f00c267c6d6574b4a25164766b818d4519769a7c9b48d62\""
Mar 6 01:39:45.089486 containerd[1547]: time="2026-03-06T01:39:45.089361564Z" level=info msg="StartContainer for \"6f8eaf4d37a57f052f00c267c6d6574b4a25164766b818d4519769a7c9b48d62\""
Mar 6 01:39:45.090133 containerd[1547]: time="2026-03-06T01:39:45.090054435Z" level=info msg="CreateContainer within sandbox \"316ed7863f68136ce7f9a79a1172a9ce7f6c0378607f540df8ca4b76fbc35a31\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6a1a691abf1bca391b1592a91904814aa3f1f489c1be79a95abd1f910d934e9e\""
Mar 6 01:39:45.091089 containerd[1547]: time="2026-03-06T01:39:45.091044304Z" level=info msg="StartContainer for \"6a1a691abf1bca391b1592a91904814aa3f1f489c1be79a95abd1f910d934e9e\""
Mar 6 01:39:45.112048 containerd[1547]: time="2026-03-06T01:39:45.111955918Z" level=info msg="CreateContainer within sandbox \"410a765c5c2f0781fd6031dcae4cf40e3f2a9774b0bab52ca2a3c89030a97d07\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a817190a19eb0ec04bd7e14d7f0d42501d28ea47f6e1260599619cc5e3012be9\""
Mar 6 01:39:45.114309 containerd[1547]: time="2026-03-06T01:39:45.114206749Z" level=info msg="StartContainer for \"a817190a19eb0ec04bd7e14d7f0d42501d28ea47f6e1260599619cc5e3012be9\""
Mar 6 01:39:45.415500 containerd[1547]: time="2026-03-06T01:39:45.413474518Z" level=info msg="StartContainer for \"a817190a19eb0ec04bd7e14d7f0d42501d28ea47f6e1260599619cc5e3012be9\" returns successfully"
Mar 6 01:39:45.628330 containerd[1547]: time="2026-03-06T01:39:45.628148896Z" level=info msg="StartContainer for \"6a1a691abf1bca391b1592a91904814aa3f1f489c1be79a95abd1f910d934e9e\" returns successfully"
Mar 6 01:39:45.629670 kubelet[2239]: E0306 01:39:45.629582 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="3.2s"
Mar 6 01:39:45.636514 containerd[1547]: time="2026-03-06T01:39:45.636477328Z" level=info msg="StartContainer for
\"6f8eaf4d37a57f052f00c267c6d6574b4a25164766b818d4519769a7c9b48d62\" returns successfully"
Mar 6 01:39:45.642598 kubelet[2239]: E0306 01:39:45.642551 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 6 01:39:45.643947 kubelet[2239]: E0306 01:39:45.643923 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:39:45.657588 kubelet[2239]: E0306 01:39:45.657173 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 6 01:39:45.657588 kubelet[2239]: E0306 01:39:45.657418 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:39:45.780410 kubelet[2239]: I0306 01:39:45.776922 2239 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 6 01:39:46.683245 kubelet[2239]: E0306 01:39:46.683050 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 6 01:39:46.684548 kubelet[2239]: E0306 01:39:46.684512 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 6 01:39:46.689393 kubelet[2239]: E0306 01:39:46.684943 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:39:46.689393 kubelet[2239]: E0306 01:39:46.685224 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:39:47.665697 kubelet[2239]: E0306 01:39:47.665549 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 6 01:39:47.665852 kubelet[2239]: E0306 01:39:47.665773 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:39:48.667566 kubelet[2239]: E0306 01:39:48.667423 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 6 01:39:48.668168 kubelet[2239]: E0306 01:39:48.667682 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:39:49.509042 kubelet[2239]: E0306 01:39:49.508875 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 6 01:39:49.509297 kubelet[2239]: E0306 01:39:49.509096 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:39:50.341665 kubelet[2239]: E0306 01:39:50.341516 2239 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 6 01:39:50.436133 kubelet[2239]: I0306 01:39:50.436092 2239 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 6 01:39:50.440056 kubelet[2239]: E0306 01:39:50.440022 2239 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 6 01:39:50.449768 kubelet[2239]: I0306 01:39:50.445513 2239 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 6 01:39:50.464097 kubelet[2239]: E0306 01:39:50.464050 2239 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 6 01:39:50.465014 kubelet[2239]: I0306 01:39:50.464248 2239 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 6 01:39:50.468044 kubelet[2239]: E0306 01:39:50.467911 2239 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 6 01:39:50.468044 kubelet[2239]: I0306 01:39:50.468041 2239 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 6 01:39:50.472447 kubelet[2239]: E0306 01:39:50.472423 2239 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 6 01:39:50.783803 kubelet[2239]: I0306 01:39:50.781239 2239 apiserver.go:52] "Watching apiserver"
Mar 6 01:39:50.856376 kubelet[2239]: I0306 01:39:50.855776 2239 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 6 01:39:52.992063 systemd[1]: Reloading requested from client PID 2530 ('systemctl') (unit session-5.scope)...
Mar 6 01:39:52.992107 systemd[1]: Reloading...
Mar 6 01:39:53.083375 zram_generator::config[2572]: No configuration found.
Mar 6 01:39:53.205526 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 6 01:39:53.297991 systemd[1]: Reloading finished in 305 ms.
Mar 6 01:39:53.342997 kubelet[2239]: I0306 01:39:53.342829 2239 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 6 01:39:53.343503 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 01:39:53.363050 systemd[1]: kubelet.service: Deactivated successfully.
Mar 6 01:39:53.363556 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 01:39:53.372507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 01:39:53.539665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 01:39:53.553002 (kubelet)[2624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 6 01:39:53.613576 kubelet[2624]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 6 01:39:53.613576 kubelet[2624]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 6 01:39:53.613576 kubelet[2624]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 6 01:39:53.614396 kubelet[2624]: I0306 01:39:53.613599 2624 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 6 01:39:53.628176 kubelet[2624]: I0306 01:39:53.628094 2624 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 6 01:39:53.628176 kubelet[2624]: I0306 01:39:53.628137 2624 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 6 01:39:53.628469 kubelet[2624]: I0306 01:39:53.628419 2624 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 6 01:39:53.629831 kubelet[2624]: I0306 01:39:53.629736 2624 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 6 01:39:53.632788 kubelet[2624]: I0306 01:39:53.632735 2624 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 6 01:39:53.638713 kubelet[2624]: E0306 01:39:53.638675 2624 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 6 01:39:53.638713 kubelet[2624]: I0306 01:39:53.638707 2624 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 6 01:39:53.647061 kubelet[2624]: I0306 01:39:53.646977 2624 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to /" Mar 6 01:39:53.647786 kubelet[2624]: I0306 01:39:53.647702 2624 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 01:39:53.647893 kubelet[2624]: I0306 01:39:53.647750 2624 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 6 01:39:53.647893 kubelet[2624]: I0306 01:39:53.647872 2624 topology_manager.go:138] "Creating topology manager with none policy" Mar 6 01:39:53.647893 
kubelet[2624]: I0306 01:39:53.647881 2624 container_manager_linux.go:303] "Creating device plugin manager" Mar 6 01:39:53.648093 kubelet[2624]: I0306 01:39:53.647968 2624 state_mem.go:36] "Initialized new in-memory state store" Mar 6 01:39:53.648278 kubelet[2624]: I0306 01:39:53.648227 2624 kubelet.go:480] "Attempting to sync node with API server" Mar 6 01:39:53.648324 kubelet[2624]: I0306 01:39:53.648308 2624 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 01:39:53.648359 kubelet[2624]: I0306 01:39:53.648330 2624 kubelet.go:386] "Adding apiserver pod source" Mar 6 01:39:53.648359 kubelet[2624]: I0306 01:39:53.648345 2624 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 01:39:53.651469 kubelet[2624]: I0306 01:39:53.650551 2624 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 6 01:39:53.651469 kubelet[2624]: I0306 01:39:53.651394 2624 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 01:39:53.659051 kubelet[2624]: I0306 01:39:53.658982 2624 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 6 01:39:53.659107 kubelet[2624]: I0306 01:39:53.659058 2624 server.go:1289] "Started kubelet" Mar 6 01:39:53.662343 kubelet[2624]: I0306 01:39:53.662228 2624 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 01:39:53.665036 kubelet[2624]: I0306 01:39:53.664831 2624 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 6 01:39:53.665341 kubelet[2624]: I0306 01:39:53.665322 2624 server.go:317] "Adding debug handlers to kubelet server" Mar 6 01:39:53.665819 kubelet[2624]: I0306 01:39:53.665613 2624 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 01:39:53.667678 kubelet[2624]: I0306 
01:39:53.667460 2624 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 6 01:39:53.669658 kubelet[2624]: I0306 01:39:53.669513 2624 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 01:39:53.671755 kubelet[2624]: I0306 01:39:53.671597 2624 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 6 01:39:53.671888 kubelet[2624]: I0306 01:39:53.671779 2624 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 6 01:39:53.672365 kubelet[2624]: I0306 01:39:53.672023 2624 reconciler.go:26] "Reconciler: start to sync state" Mar 6 01:39:53.674545 kubelet[2624]: I0306 01:39:53.674451 2624 factory.go:223] Registration of the systemd container factory successfully Mar 6 01:39:53.678678 kubelet[2624]: I0306 01:39:53.678593 2624 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 01:39:53.680766 kubelet[2624]: I0306 01:39:53.680727 2624 factory.go:223] Registration of the containerd container factory successfully Mar 6 01:39:53.681217 kubelet[2624]: E0306 01:39:53.681156 2624 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 6 01:39:53.697636 kubelet[2624]: I0306 01:39:53.697601 2624 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 6 01:39:53.699875 kubelet[2624]: I0306 01:39:53.699855 2624 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 6 01:39:53.700025 kubelet[2624]: I0306 01:39:53.700013 2624 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 6 01:39:53.700147 kubelet[2624]: I0306 01:39:53.700133 2624 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 6 01:39:53.700191 kubelet[2624]: I0306 01:39:53.700183 2624 kubelet.go:2436] "Starting kubelet main sync loop" Mar 6 01:39:53.700376 kubelet[2624]: E0306 01:39:53.700317 2624 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 01:39:53.762894 kubelet[2624]: I0306 01:39:53.762861 2624 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 6 01:39:53.763157 kubelet[2624]: I0306 01:39:53.763086 2624 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 6 01:39:53.763157 kubelet[2624]: I0306 01:39:53.763137 2624 state_mem.go:36] "Initialized new in-memory state store" Mar 6 01:39:53.763492 kubelet[2624]: I0306 01:39:53.763408 2624 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 6 01:39:53.763492 kubelet[2624]: I0306 01:39:53.763426 2624 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 6 01:39:53.763492 kubelet[2624]: I0306 01:39:53.763450 2624 policy_none.go:49] "None policy: Start" Mar 6 01:39:53.763492 kubelet[2624]: I0306 01:39:53.763464 2624 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 6 01:39:53.763492 kubelet[2624]: I0306 01:39:53.763479 2624 state_mem.go:35] "Initializing new in-memory state store" Mar 6 01:39:53.763701 kubelet[2624]: I0306 01:39:53.763647 2624 state_mem.go:75] "Updated machine memory state" Mar 6 01:39:53.766320 kubelet[2624]: E0306 01:39:53.766140 2624 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 01:39:53.766524 kubelet[2624]: I0306 01:39:53.766465 
2624 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 6 01:39:53.766569 kubelet[2624]: I0306 01:39:53.766514 2624 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 01:39:53.767673 kubelet[2624]: I0306 01:39:53.767630 2624 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 6 01:39:53.770244 kubelet[2624]: E0306 01:39:53.770125 2624 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 6 01:39:53.805797 kubelet[2624]: I0306 01:39:53.803987 2624 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:39:53.805797 kubelet[2624]: I0306 01:39:53.804542 2624 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:39:53.805797 kubelet[2624]: I0306 01:39:53.804566 2624 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:39:53.872828 kubelet[2624]: I0306 01:39:53.872723 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53610c1b410b2aff2c50aec13e64bb9b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"53610c1b410b2aff2c50aec13e64bb9b\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:39:53.877706 kubelet[2624]: I0306 01:39:53.877653 2624 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:39:53.888514 kubelet[2624]: I0306 01:39:53.888472 2624 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 6 01:39:53.888651 kubelet[2624]: I0306 01:39:53.888578 2624 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 6 01:39:53.973529 kubelet[2624]: I0306 01:39:53.973468 2624 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53610c1b410b2aff2c50aec13e64bb9b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"53610c1b410b2aff2c50aec13e64bb9b\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:39:53.973715 kubelet[2624]: I0306 01:39:53.973604 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:39:53.973715 kubelet[2624]: I0306 01:39:53.973683 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:39:53.973764 kubelet[2624]: I0306 01:39:53.973721 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53610c1b410b2aff2c50aec13e64bb9b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"53610c1b410b2aff2c50aec13e64bb9b\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:39:53.973860 kubelet[2624]: I0306 01:39:53.973780 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:39:53.973860 kubelet[2624]: I0306 01:39:53.973816 2624 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:39:53.973860 kubelet[2624]: I0306 01:39:53.973851 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:39:53.974032 kubelet[2624]: I0306 01:39:53.973880 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 6 01:39:54.114843 kubelet[2624]: E0306 01:39:54.114701 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:39:54.116829 kubelet[2624]: E0306 01:39:54.116604 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:39:54.116829 kubelet[2624]: E0306 01:39:54.116656 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:39:54.652348 kubelet[2624]: I0306 01:39:54.652053 2624 apiserver.go:52] "Watching apiserver" Mar 6 01:39:54.672713 kubelet[2624]: I0306 01:39:54.672537 
2624 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 6 01:39:54.966144 kubelet[2624]: I0306 01:39:54.965404 2624 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:39:54.966763 kubelet[2624]: I0306 01:39:54.966693 2624 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:39:54.967165 kubelet[2624]: I0306 01:39:54.967110 2624 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:39:54.980234 kubelet[2624]: E0306 01:39:54.980130 2624 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 6 01:39:54.980388 kubelet[2624]: E0306 01:39:54.980335 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:39:54.983233 kubelet[2624]: E0306 01:39:54.983110 2624 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:39:54.985025 kubelet[2624]: E0306 01:39:54.984832 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:39:54.985094 kubelet[2624]: E0306 01:39:54.985078 2624 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 6 01:39:54.985396 kubelet[2624]: E0306 01:39:54.985353 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:39:55.039600 kubelet[2624]: I0306 
01:39:55.037381 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.037363529 podStartE2EDuration="2.037363529s" podCreationTimestamp="2026-03-06 01:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:39:55.036871746 +0000 UTC m=+1.476430079" watchObservedRunningTime="2026-03-06 01:39:55.037363529 +0000 UTC m=+1.476921844" Mar 6 01:39:55.095885 kubelet[2624]: I0306 01:39:55.095807 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.095792783 podStartE2EDuration="2.095792783s" podCreationTimestamp="2026-03-06 01:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:39:55.080872118 +0000 UTC m=+1.520430422" watchObservedRunningTime="2026-03-06 01:39:55.095792783 +0000 UTC m=+1.535351086" Mar 6 01:39:55.139137 kubelet[2624]: I0306 01:39:55.138854 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.138835428 podStartE2EDuration="2.138835428s" podCreationTimestamp="2026-03-06 01:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:39:55.097038929 +0000 UTC m=+1.536597243" watchObservedRunningTime="2026-03-06 01:39:55.138835428 +0000 UTC m=+1.578393732" Mar 6 01:39:55.841868 sudo[1706]: pam_unix(sudo:session): session closed for user root Mar 6 01:39:55.855825 sshd[1699]: pam_unix(sshd:session): session closed for user core Mar 6 01:39:55.862018 systemd[1]: sshd@4-10.0.0.102:22-10.0.0.1:39642.service: Deactivated successfully. Mar 6 01:39:55.865529 systemd-logind[1525]: Session 5 logged out. 
Waiting for processes to exit. Mar 6 01:39:55.865531 systemd[1]: session-5.scope: Deactivated successfully. Mar 6 01:39:55.868296 systemd-logind[1525]: Removed session 5. Mar 6 01:39:56.009493 kubelet[2624]: E0306 01:39:56.008647 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:39:56.009493 kubelet[2624]: E0306 01:39:56.009177 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:39:56.014849 kubelet[2624]: E0306 01:39:56.014770 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:39:57.012865 kubelet[2624]: E0306 01:39:57.012669 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:39:58.015379 kubelet[2624]: E0306 01:39:58.015161 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:39:59.288619 kubelet[2624]: I0306 01:39:59.288486 2624 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 6 01:39:59.289163 containerd[1547]: time="2026-03-06T01:39:59.289089203Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 6 01:39:59.289615 kubelet[2624]: I0306 01:39:59.289468 2624 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 6 01:40:00.427932 kubelet[2624]: I0306 01:40:00.427805 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/022ec01f-676b-4b4f-bf03-a032069381f3-xtables-lock\") pod \"kube-proxy-x6cpz\" (UID: \"022ec01f-676b-4b4f-bf03-a032069381f3\") " pod="kube-system/kube-proxy-x6cpz" Mar 6 01:40:00.427932 kubelet[2624]: I0306 01:40:00.427907 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/022ec01f-676b-4b4f-bf03-a032069381f3-lib-modules\") pod \"kube-proxy-x6cpz\" (UID: \"022ec01f-676b-4b4f-bf03-a032069381f3\") " pod="kube-system/kube-proxy-x6cpz" Mar 6 01:40:00.427932 kubelet[2624]: I0306 01:40:00.427928 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/6b54a522-be43-40b4-9c8d-defab3f94ed6-cni\") pod \"kube-flannel-ds-fwdt5\" (UID: \"6b54a522-be43-40b4-9c8d-defab3f94ed6\") " pod="kube-flannel/kube-flannel-ds-fwdt5" Mar 6 01:40:00.427932 kubelet[2624]: I0306 01:40:00.427948 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/6b54a522-be43-40b4-9c8d-defab3f94ed6-flannel-cfg\") pod \"kube-flannel-ds-fwdt5\" (UID: \"6b54a522-be43-40b4-9c8d-defab3f94ed6\") " pod="kube-flannel/kube-flannel-ds-fwdt5" Mar 6 01:40:00.428821 kubelet[2624]: I0306 01:40:00.427966 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq4jw\" (UniqueName: \"kubernetes.io/projected/6b54a522-be43-40b4-9c8d-defab3f94ed6-kube-api-access-jq4jw\") pod \"kube-flannel-ds-fwdt5\" (UID: 
\"6b54a522-be43-40b4-9c8d-defab3f94ed6\") " pod="kube-flannel/kube-flannel-ds-fwdt5" Mar 6 01:40:00.428821 kubelet[2624]: I0306 01:40:00.428062 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b54a522-be43-40b4-9c8d-defab3f94ed6-xtables-lock\") pod \"kube-flannel-ds-fwdt5\" (UID: \"6b54a522-be43-40b4-9c8d-defab3f94ed6\") " pod="kube-flannel/kube-flannel-ds-fwdt5" Mar 6 01:40:00.428821 kubelet[2624]: I0306 01:40:00.428114 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6b54a522-be43-40b4-9c8d-defab3f94ed6-run\") pod \"kube-flannel-ds-fwdt5\" (UID: \"6b54a522-be43-40b4-9c8d-defab3f94ed6\") " pod="kube-flannel/kube-flannel-ds-fwdt5" Mar 6 01:40:00.428821 kubelet[2624]: I0306 01:40:00.428131 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/6b54a522-be43-40b4-9c8d-defab3f94ed6-cni-plugin\") pod \"kube-flannel-ds-fwdt5\" (UID: \"6b54a522-be43-40b4-9c8d-defab3f94ed6\") " pod="kube-flannel/kube-flannel-ds-fwdt5" Mar 6 01:40:00.428821 kubelet[2624]: I0306 01:40:00.428150 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/022ec01f-676b-4b4f-bf03-a032069381f3-kube-proxy\") pod \"kube-proxy-x6cpz\" (UID: \"022ec01f-676b-4b4f-bf03-a032069381f3\") " pod="kube-system/kube-proxy-x6cpz" Mar 6 01:40:00.429082 kubelet[2624]: I0306 01:40:00.428164 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2f77\" (UniqueName: \"kubernetes.io/projected/022ec01f-676b-4b4f-bf03-a032069381f3-kube-api-access-m2f77\") pod \"kube-proxy-x6cpz\" (UID: \"022ec01f-676b-4b4f-bf03-a032069381f3\") " pod="kube-system/kube-proxy-x6cpz" 
Mar 6 01:40:00.651823 kubelet[2624]: E0306 01:40:00.651718 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:00.652754 containerd[1547]: time="2026-03-06T01:40:00.652670164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x6cpz,Uid:022ec01f-676b-4b4f-bf03-a032069381f3,Namespace:kube-system,Attempt:0,}" Mar 6 01:40:00.656953 kubelet[2624]: E0306 01:40:00.656762 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:00.657605 containerd[1547]: time="2026-03-06T01:40:00.657537936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-fwdt5,Uid:6b54a522-be43-40b4-9c8d-defab3f94ed6,Namespace:kube-flannel,Attempt:0,}" Mar 6 01:40:00.699627 containerd[1547]: time="2026-03-06T01:40:00.698649317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:40:00.700530 containerd[1547]: time="2026-03-06T01:40:00.700000190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:40:00.700530 containerd[1547]: time="2026-03-06T01:40:00.700112079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:40:00.700530 containerd[1547]: time="2026-03-06T01:40:00.700226702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:40:00.700745 containerd[1547]: time="2026-03-06T01:40:00.700657904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:40:00.701242 containerd[1547]: time="2026-03-06T01:40:00.698846595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:40:00.701242 containerd[1547]: time="2026-03-06T01:40:00.701118103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:40:00.701539 containerd[1547]: time="2026-03-06T01:40:00.701359552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:40:00.771736 containerd[1547]: time="2026-03-06T01:40:00.771657911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x6cpz,Uid:022ec01f-676b-4b4f-bf03-a032069381f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"c07eb5898f2a79fa14274958b3a06154fc818b2d8985805cd6764441ba796158\"" Mar 6 01:40:00.774300 kubelet[2624]: E0306 01:40:00.772764 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:00.778282 containerd[1547]: time="2026-03-06T01:40:00.778164454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-fwdt5,Uid:6b54a522-be43-40b4-9c8d-defab3f94ed6,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"e80eed97b7a0dc59533d465b19543efd306b5754dfaf6410fd53cf5c8c3f1fcf\"" Mar 6 01:40:00.779648 kubelet[2624]: E0306 01:40:00.779605 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:00.781493 containerd[1547]: time="2026-03-06T01:40:00.781393114Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Mar 6 01:40:00.784461 containerd[1547]: 
time="2026-03-06T01:40:00.784396037Z" level=info msg="CreateContainer within sandbox \"c07eb5898f2a79fa14274958b3a06154fc818b2d8985805cd6764441ba796158\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 6 01:40:00.806231 containerd[1547]: time="2026-03-06T01:40:00.806072922Z" level=info msg="CreateContainer within sandbox \"c07eb5898f2a79fa14274958b3a06154fc818b2d8985805cd6764441ba796158\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"95f1d2385ca16babfcdc1bcb017af812e666d6fa34168f36011c526405749860\"" Mar 6 01:40:00.807051 containerd[1547]: time="2026-03-06T01:40:00.806983318Z" level=info msg="StartContainer for \"95f1d2385ca16babfcdc1bcb017af812e666d6fa34168f36011c526405749860\"" Mar 6 01:40:00.893789 containerd[1547]: time="2026-03-06T01:40:00.893671095Z" level=info msg="StartContainer for \"95f1d2385ca16babfcdc1bcb017af812e666d6fa34168f36011c526405749860\" returns successfully" Mar 6 01:40:00.954614 update_engine[1533]: I20260306 01:40:00.954427 1533 update_attempter.cc:509] Updating boot flags... 
Mar 6 01:40:00.981414 kubelet[2624]: E0306 01:40:00.981367 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:01.004585 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2821) Mar 6 01:40:01.039324 kubelet[2624]: E0306 01:40:01.036768 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:01.039324 kubelet[2624]: E0306 01:40:01.037449 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:01.072838 kubelet[2624]: I0306 01:40:01.072567 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x6cpz" podStartSLOduration=1.072550162 podStartE2EDuration="1.072550162s" podCreationTimestamp="2026-03-06 01:40:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:40:01.072380466 +0000 UTC m=+7.511938800" watchObservedRunningTime="2026-03-06 01:40:01.072550162 +0000 UTC m=+7.512108466" Mar 6 01:40:01.110358 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2823) Mar 6 01:40:01.180377 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2823) Mar 6 01:40:01.674022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount111549836.mount: Deactivated successfully. 
Mar 6 01:40:01.991776 containerd[1547]: time="2026-03-06T01:40:01.991369689Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:40:01.993116 containerd[1547]: time="2026-03-06T01:40:01.993023225Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Mar 6 01:40:01.994374 containerd[1547]: time="2026-03-06T01:40:01.994321284Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:40:01.998847 containerd[1547]: time="2026-03-06T01:40:01.998563029Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:40:02.000032 containerd[1547]: time="2026-03-06T01:40:01.999959291Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 1.218500887s" Mar 6 01:40:02.000089 containerd[1547]: time="2026-03-06T01:40:02.000027279Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Mar 6 01:40:02.006247 containerd[1547]: time="2026-03-06T01:40:02.006128538Z" level=info msg="CreateContainer within sandbox \"e80eed97b7a0dc59533d465b19543efd306b5754dfaf6410fd53cf5c8c3f1fcf\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" 
Mar 6 01:40:02.024056 containerd[1547]: time="2026-03-06T01:40:02.023968629Z" level=info msg="CreateContainer within sandbox \"e80eed97b7a0dc59533d465b19543efd306b5754dfaf6410fd53cf5c8c3f1fcf\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"6ec89d998a93d43a7c0e36abfa24f5a55f3256e14ce7bff4d9ca394aed4758db\"" Mar 6 01:40:02.024999 containerd[1547]: time="2026-03-06T01:40:02.024953934Z" level=info msg="StartContainer for \"6ec89d998a93d43a7c0e36abfa24f5a55f3256e14ce7bff4d9ca394aed4758db\"" Mar 6 01:40:02.025003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1875760311.mount: Deactivated successfully. Mar 6 01:40:02.053675 kubelet[2624]: E0306 01:40:02.053633 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:02.144705 containerd[1547]: time="2026-03-06T01:40:02.144567572Z" level=info msg="StartContainer for \"6ec89d998a93d43a7c0e36abfa24f5a55f3256e14ce7bff4d9ca394aed4758db\" returns successfully" Mar 6 01:40:02.369118 containerd[1547]: time="2026-03-06T01:40:02.369034362Z" level=info msg="shim disconnected" id=6ec89d998a93d43a7c0e36abfa24f5a55f3256e14ce7bff4d9ca394aed4758db namespace=k8s.io Mar 6 01:40:02.369118 containerd[1547]: time="2026-03-06T01:40:02.369114170Z" level=warning msg="cleaning up after shim disconnected" id=6ec89d998a93d43a7c0e36abfa24f5a55f3256e14ce7bff4d9ca394aed4758db namespace=k8s.io Mar 6 01:40:02.369118 containerd[1547]: time="2026-03-06T01:40:02.369125561Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:40:03.058850 kubelet[2624]: E0306 01:40:03.058718 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:03.063180 containerd[1547]: time="2026-03-06T01:40:03.062846001Z" level=info msg="PullImage 
\"ghcr.io/flannel-io/flannel:v0.26.7\"" Mar 6 01:40:04.012961 kubelet[2624]: E0306 01:40:04.010670 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:04.064616 kubelet[2624]: E0306 01:40:04.064526 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:05.136112 containerd[1547]: time="2026-03-06T01:40:05.136016102Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:40:05.137010 containerd[1547]: time="2026-03-06T01:40:05.136925751Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Mar 6 01:40:05.138511 containerd[1547]: time="2026-03-06T01:40:05.138448624Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:40:05.142688 containerd[1547]: time="2026-03-06T01:40:05.142609783Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:40:05.145688 containerd[1547]: time="2026-03-06T01:40:05.145529056Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 2.082473886s" Mar 6 01:40:05.145688 containerd[1547]: time="2026-03-06T01:40:05.145629653Z" level=info msg="PullImage 
\"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Mar 6 01:40:05.158079 containerd[1547]: time="2026-03-06T01:40:05.157993001Z" level=info msg="CreateContainer within sandbox \"e80eed97b7a0dc59533d465b19543efd306b5754dfaf6410fd53cf5c8c3f1fcf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 6 01:40:05.173342 containerd[1547]: time="2026-03-06T01:40:05.172444460Z" level=info msg="CreateContainer within sandbox \"e80eed97b7a0dc59533d465b19543efd306b5754dfaf6410fd53cf5c8c3f1fcf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0560826294982418b7ac965fc71f7215c189c6f029c9925b87f7ffb6d3690349\"" Mar 6 01:40:05.173063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2592556995.mount: Deactivated successfully. Mar 6 01:40:05.175562 containerd[1547]: time="2026-03-06T01:40:05.174618804Z" level=info msg="StartContainer for \"0560826294982418b7ac965fc71f7215c189c6f029c9925b87f7ffb6d3690349\"" Mar 6 01:40:05.281302 kubelet[2624]: I0306 01:40:05.280142 2624 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 6 01:40:05.286345 containerd[1547]: time="2026-03-06T01:40:05.286152751Z" level=info msg="StartContainer for \"0560826294982418b7ac965fc71f7215c189c6f029c9925b87f7ffb6d3690349\" returns successfully" Mar 6 01:40:05.343106 containerd[1547]: time="2026-03-06T01:40:05.342610663Z" level=info msg="shim disconnected" id=0560826294982418b7ac965fc71f7215c189c6f029c9925b87f7ffb6d3690349 namespace=k8s.io Mar 6 01:40:05.343106 containerd[1547]: time="2026-03-06T01:40:05.342914528Z" level=warning msg="cleaning up after shim disconnected" id=0560826294982418b7ac965fc71f7215c189c6f029c9925b87f7ffb6d3690349 namespace=k8s.io Mar 6 01:40:05.343106 containerd[1547]: time="2026-03-06T01:40:05.342935246Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:40:05.427033 kubelet[2624]: I0306 
01:40:05.426683 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hnct\" (UniqueName: \"kubernetes.io/projected/505a1d29-0e09-459a-891a-31d549d357df-kube-api-access-4hnct\") pod \"coredns-674b8bbfcf-njtt2\" (UID: \"505a1d29-0e09-459a-891a-31d549d357df\") " pod="kube-system/coredns-674b8bbfcf-njtt2" Mar 6 01:40:05.427033 kubelet[2624]: I0306 01:40:05.426788 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12d874e2-2b61-4bdf-8f50-f7a6d124e83b-config-volume\") pod \"coredns-674b8bbfcf-n97nb\" (UID: \"12d874e2-2b61-4bdf-8f50-f7a6d124e83b\") " pod="kube-system/coredns-674b8bbfcf-n97nb" Mar 6 01:40:05.427033 kubelet[2624]: I0306 01:40:05.426825 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrbrq\" (UniqueName: \"kubernetes.io/projected/12d874e2-2b61-4bdf-8f50-f7a6d124e83b-kube-api-access-wrbrq\") pod \"coredns-674b8bbfcf-n97nb\" (UID: \"12d874e2-2b61-4bdf-8f50-f7a6d124e83b\") " pod="kube-system/coredns-674b8bbfcf-n97nb" Mar 6 01:40:05.427033 kubelet[2624]: I0306 01:40:05.426901 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/505a1d29-0e09-459a-891a-31d549d357df-config-volume\") pod \"coredns-674b8bbfcf-njtt2\" (UID: \"505a1d29-0e09-459a-891a-31d549d357df\") " pod="kube-system/coredns-674b8bbfcf-njtt2" Mar 6 01:40:05.628807 kubelet[2624]: E0306 01:40:05.628233 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:05.630669 containerd[1547]: time="2026-03-06T01:40:05.630402831Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-n97nb,Uid:12d874e2-2b61-4bdf-8f50-f7a6d124e83b,Namespace:kube-system,Attempt:0,}" Mar 6 01:40:05.632423 kubelet[2624]: E0306 01:40:05.632053 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:05.632793 containerd[1547]: time="2026-03-06T01:40:05.632676582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-njtt2,Uid:505a1d29-0e09-459a-891a-31d549d357df,Namespace:kube-system,Attempt:0,}" Mar 6 01:40:05.963483 containerd[1547]: time="2026-03-06T01:40:05.950014874Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-njtt2,Uid:505a1d29-0e09-459a-891a-31d549d357df,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cd5167b6d18cb37e439385cbbb730a585e801b8c6c3ff3ffbda70c3c89b2ee46\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 6 01:40:05.964227 kubelet[2624]: E0306 01:40:05.964167 2624 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd5167b6d18cb37e439385cbbb730a585e801b8c6c3ff3ffbda70c3c89b2ee46\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 6 01:40:05.964436 kubelet[2624]: E0306 01:40:05.964398 2624 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd5167b6d18cb37e439385cbbb730a585e801b8c6c3ff3ffbda70c3c89b2ee46\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-njtt2" Mar 6 01:40:05.964511 kubelet[2624]: E0306 01:40:05.964441 2624 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd5167b6d18cb37e439385cbbb730a585e801b8c6c3ff3ffbda70c3c89b2ee46\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-njtt2" Mar 6 01:40:05.967317 kubelet[2624]: E0306 01:40:05.964525 2624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-njtt2_kube-system(505a1d29-0e09-459a-891a-31d549d357df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-njtt2_kube-system(505a1d29-0e09-459a-891a-31d549d357df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd5167b6d18cb37e439385cbbb730a585e801b8c6c3ff3ffbda70c3c89b2ee46\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-njtt2" podUID="505a1d29-0e09-459a-891a-31d549d357df" Mar 6 01:40:05.981365 containerd[1547]: time="2026-03-06T01:40:05.980671974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n97nb,Uid:12d874e2-2b61-4bdf-8f50-f7a6d124e83b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c95887670984ecbf5be1c63e4ceacf6f8dccda6824c1f50bd5b90256acbbb845\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 6 01:40:05.981613 kubelet[2624]: E0306 01:40:05.980939 2624 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c95887670984ecbf5be1c63e4ceacf6f8dccda6824c1f50bd5b90256acbbb845\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 6 
01:40:05.981613 kubelet[2624]: E0306 01:40:05.980984 2624 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c95887670984ecbf5be1c63e4ceacf6f8dccda6824c1f50bd5b90256acbbb845\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-n97nb" Mar 6 01:40:05.981613 kubelet[2624]: E0306 01:40:05.981010 2624 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c95887670984ecbf5be1c63e4ceacf6f8dccda6824c1f50bd5b90256acbbb845\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-n97nb" Mar 6 01:40:05.981613 kubelet[2624]: E0306 01:40:05.981071 2624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-n97nb_kube-system(12d874e2-2b61-4bdf-8f50-f7a6d124e83b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-n97nb_kube-system(12d874e2-2b61-4bdf-8f50-f7a6d124e83b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c95887670984ecbf5be1c63e4ceacf6f8dccda6824c1f50bd5b90256acbbb845\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-n97nb" podUID="12d874e2-2b61-4bdf-8f50-f7a6d124e83b" Mar 6 01:40:06.090477 kubelet[2624]: E0306 01:40:06.088838 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:06.112600 containerd[1547]: time="2026-03-06T01:40:06.112150816Z" level=info msg="CreateContainer within sandbox 
\"e80eed97b7a0dc59533d465b19543efd306b5754dfaf6410fd53cf5c8c3f1fcf\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Mar 6 01:40:06.140738 containerd[1547]: time="2026-03-06T01:40:06.139645025Z" level=info msg="CreateContainer within sandbox \"e80eed97b7a0dc59533d465b19543efd306b5754dfaf6410fd53cf5c8c3f1fcf\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"f07982c87082c673a3ddc0f016e0f0f41a1fed97c1f6eb109e3c71b23c995ede\"" Mar 6 01:40:06.140738 containerd[1547]: time="2026-03-06T01:40:06.140554068Z" level=info msg="StartContainer for \"f07982c87082c673a3ddc0f016e0f0f41a1fed97c1f6eb109e3c71b23c995ede\"" Mar 6 01:40:06.177075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0560826294982418b7ac965fc71f7215c189c6f029c9925b87f7ffb6d3690349-rootfs.mount: Deactivated successfully. Mar 6 01:40:06.299331 containerd[1547]: time="2026-03-06T01:40:06.298740775Z" level=info msg="StartContainer for \"f07982c87082c673a3ddc0f016e0f0f41a1fed97c1f6eb109e3c71b23c995ede\" returns successfully" Mar 6 01:40:07.099325 kubelet[2624]: E0306 01:40:07.098906 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:07.419366 systemd-networkd[1245]: flannel.1: Link UP Mar 6 01:40:07.419383 systemd-networkd[1245]: flannel.1: Gained carrier Mar 6 01:40:08.101181 kubelet[2624]: E0306 01:40:08.100994 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:09.048701 systemd-networkd[1245]: flannel.1: Gained IPv6LL Mar 6 01:40:20.702167 kubelet[2624]: E0306 01:40:20.701980 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:20.703062 
kubelet[2624]: E0306 01:40:20.702192 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:20.703124 containerd[1547]: time="2026-03-06T01:40:20.702919131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n97nb,Uid:12d874e2-2b61-4bdf-8f50-f7a6d124e83b,Namespace:kube-system,Attempt:0,}" Mar 6 01:40:20.703639 containerd[1547]: time="2026-03-06T01:40:20.703469808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-njtt2,Uid:505a1d29-0e09-459a-891a-31d549d357df,Namespace:kube-system,Attempt:0,}" Mar 6 01:40:20.768481 systemd-networkd[1245]: cni0: Link UP Mar 6 01:40:20.768495 systemd-networkd[1245]: cni0: Gained carrier Mar 6 01:40:20.778443 systemd-networkd[1245]: cni0: Lost carrier Mar 6 01:40:20.782806 systemd-networkd[1245]: vethed4a688d: Link UP Mar 6 01:40:20.790112 kernel: cni0: port 1(vethed4a688d) entered blocking state Mar 6 01:40:20.790202 kernel: cni0: port 1(vethed4a688d) entered disabled state Mar 6 01:40:20.792356 kernel: vethed4a688d: entered allmulticast mode Mar 6 01:40:20.797335 kernel: vethed4a688d: entered promiscuous mode Mar 6 01:40:20.805531 kernel: cni0: port 1(vethed4a688d) entered blocking state Mar 6 01:40:20.805590 kernel: cni0: port 1(vethed4a688d) entered forwarding state Mar 6 01:40:20.810364 kernel: cni0: port 1(vethed4a688d) entered disabled state Mar 6 01:40:20.818040 kernel: cni0: port 2(vethd9dd33de) entered blocking state Mar 6 01:40:20.818119 kernel: cni0: port 2(vethd9dd33de) entered disabled state Mar 6 01:40:20.818154 kernel: vethd9dd33de: entered allmulticast mode Mar 6 01:40:20.824053 kernel: vethd9dd33de: entered promiscuous mode Mar 6 01:40:20.827204 kernel: cni0: port 2(vethd9dd33de) entered blocking state Mar 6 01:40:20.827342 kernel: cni0: port 2(vethd9dd33de) entered forwarding state Mar 6 01:40:20.835718 kernel: cni0: port 
2(vethd9dd33de) entered disabled state Mar 6 01:40:20.838471 systemd-networkd[1245]: vethd9dd33de: Link UP Mar 6 01:40:20.857370 kernel: cni0: port 1(vethed4a688d) entered blocking state Mar 6 01:40:20.857578 kernel: cni0: port 1(vethed4a688d) entered forwarding state Mar 6 01:40:20.858095 systemd-networkd[1245]: vethed4a688d: Gained carrier Mar 6 01:40:20.858637 systemd-networkd[1245]: cni0: Gained carrier Mar 6 01:40:20.877601 systemd-networkd[1245]: vethd9dd33de: Gained carrier Mar 6 01:40:20.878747 kernel: cni0: port 2(vethd9dd33de) entered blocking state Mar 6 01:40:20.878794 kernel: cni0: port 2(vethd9dd33de) entered forwarding state Mar 6 01:40:20.881196 containerd[1547]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a950), "name":"cbr0", "type":"bridge"} Mar 6 01:40:20.881196 containerd[1547]: delegateAdd: netconf sent to delegate plugin: Mar 6 01:40:20.908921 containerd[1547]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Mar 6 01:40:20.908921 containerd[1547]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 
0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"} Mar 6 01:40:20.908921 containerd[1547]: delegateAdd: netconf sent to delegate plugin: Mar 6 01:40:20.927009 containerd[1547]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-03-06T01:40:20.926785797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:40:20.927009 containerd[1547]: time="2026-03-06T01:40:20.926932511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:40:20.929427 containerd[1547]: time="2026-03-06T01:40:20.926962006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:40:20.930417 containerd[1547]: time="2026-03-06T01:40:20.930359274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:40:20.969387 containerd[1547]: time="2026-03-06T01:40:20.969014935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:40:20.969387 containerd[1547]: time="2026-03-06T01:40:20.969087360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:40:20.969387 containerd[1547]: time="2026-03-06T01:40:20.969179261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:40:20.969597 containerd[1547]: time="2026-03-06T01:40:20.969463321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:40:20.999399 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:40:21.034361 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:40:21.047689 containerd[1547]: time="2026-03-06T01:40:21.047591395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n97nb,Uid:12d874e2-2b61-4bdf-8f50-f7a6d124e83b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f2102731ec299f2f35b84c734755ca90f7e1d203dd1222f8ec202f36e6c70b9\"" Mar 6 01:40:21.050351 kubelet[2624]: E0306 01:40:21.050298 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:21.056927 containerd[1547]: time="2026-03-06T01:40:21.056479349Z" level=info msg="CreateContainer within sandbox \"4f2102731ec299f2f35b84c734755ca90f7e1d203dd1222f8ec202f36e6c70b9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 01:40:21.083411 containerd[1547]: time="2026-03-06T01:40:21.083360559Z" level=info msg="CreateContainer within sandbox \"4f2102731ec299f2f35b84c734755ca90f7e1d203dd1222f8ec202f36e6c70b9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5152ceecf73990df61e8fd6e49b8cb6c44379c18bb0851266b9cdd9ee799dfb3\"" Mar 6 01:40:21.083531 containerd[1547]: time="2026-03-06T01:40:21.083473673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-njtt2,Uid:505a1d29-0e09-459a-891a-31d549d357df,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"459a2ee164e0f33927192e10d7df00d594f9bff489f7ac8bf19a140420ebae8e\"" Mar 6 01:40:21.084620 containerd[1547]: time="2026-03-06T01:40:21.084538730Z" level=info msg="StartContainer for \"5152ceecf73990df61e8fd6e49b8cb6c44379c18bb0851266b9cdd9ee799dfb3\"" Mar 6 01:40:21.084949 kubelet[2624]: E0306 01:40:21.084922 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:21.092356 containerd[1547]: time="2026-03-06T01:40:21.092242822Z" level=info msg="CreateContainer within sandbox \"459a2ee164e0f33927192e10d7df00d594f9bff489f7ac8bf19a140420ebae8e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 01:40:21.115806 containerd[1547]: time="2026-03-06T01:40:21.115697633Z" level=info msg="CreateContainer within sandbox \"459a2ee164e0f33927192e10d7df00d594f9bff489f7ac8bf19a140420ebae8e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"37d7f2ed5342c4a3b20ef33958be99e2ddb7ed7fb811171b5dd6389ae8d3b5c7\"" Mar 6 01:40:21.117580 containerd[1547]: time="2026-03-06T01:40:21.117438072Z" level=info msg="StartContainer for \"37d7f2ed5342c4a3b20ef33958be99e2ddb7ed7fb811171b5dd6389ae8d3b5c7\"" Mar 6 01:40:21.185295 containerd[1547]: time="2026-03-06T01:40:21.185112545Z" level=info msg="StartContainer for \"5152ceecf73990df61e8fd6e49b8cb6c44379c18bb0851266b9cdd9ee799dfb3\" returns successfully" Mar 6 01:40:21.239794 containerd[1547]: time="2026-03-06T01:40:21.239377334Z" level=info msg="StartContainer for \"37d7f2ed5342c4a3b20ef33958be99e2ddb7ed7fb811171b5dd6389ae8d3b5c7\" returns successfully" Mar 6 01:40:22.144874 kubelet[2624]: E0306 01:40:22.144616 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:22.147626 kubelet[2624]: E0306 01:40:22.147400 2624 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:22.160320 kubelet[2624]: I0306 01:40:22.159152 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-fwdt5" podStartSLOduration=17.788440413 podStartE2EDuration="22.15914099s" podCreationTimestamp="2026-03-06 01:40:00 +0000 UTC" firstStartedPulling="2026-03-06 01:40:00.780380398 +0000 UTC m=+7.219938703" lastFinishedPulling="2026-03-06 01:40:05.151080977 +0000 UTC m=+11.590639280" observedRunningTime="2026-03-06 01:40:07.115321924 +0000 UTC m=+13.554880268" watchObservedRunningTime="2026-03-06 01:40:22.15914099 +0000 UTC m=+28.598699294" Mar 6 01:40:22.160320 kubelet[2624]: I0306 01:40:22.159414 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-njtt2" podStartSLOduration=22.159406555 podStartE2EDuration="22.159406555s" podCreationTimestamp="2026-03-06 01:40:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:40:22.158909026 +0000 UTC m=+28.598467341" watchObservedRunningTime="2026-03-06 01:40:22.159406555 +0000 UTC m=+28.598964870" Mar 6 01:40:22.171798 kubelet[2624]: I0306 01:40:22.171666 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-n97nb" podStartSLOduration=22.171651883 podStartE2EDuration="22.171651883s" podCreationTimestamp="2026-03-06 01:40:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:40:22.171186054 +0000 UTC m=+28.610744388" watchObservedRunningTime="2026-03-06 01:40:22.171651883 +0000 UTC m=+28.611210197" Mar 6 01:40:22.296606 systemd-networkd[1245]: vethed4a688d: Gained IPv6LL Mar 6 01:40:22.744658 systemd-networkd[1245]: 
vethd9dd33de: Gained IPv6LL Mar 6 01:40:22.808613 systemd-networkd[1245]: cni0: Gained IPv6LL Mar 6 01:40:23.150611 kubelet[2624]: E0306 01:40:23.150491 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:23.150611 kubelet[2624]: E0306 01:40:23.150504 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:24.154675 kubelet[2624]: E0306 01:40:24.154330 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:24.155981 kubelet[2624]: E0306 01:40:24.155877 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:40:40.610686 systemd[1]: Started sshd@5-10.0.0.102:22-10.0.0.1:56628.service - OpenSSH per-connection server daemon (10.0.0.1:56628). Mar 6 01:40:40.652224 sshd[3650]: Accepted publickey for core from 10.0.0.1 port 56628 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:40:40.654227 sshd[3650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:40:40.660717 systemd-logind[1525]: New session 6 of user core. Mar 6 01:40:40.671611 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 6 01:40:40.835491 sshd[3650]: pam_unix(sshd:session): session closed for user core Mar 6 01:40:40.840760 systemd[1]: sshd@5-10.0.0.102:22-10.0.0.1:56628.service: Deactivated successfully. Mar 6 01:40:40.843675 systemd-logind[1525]: Session 6 logged out. Waiting for processes to exit. Mar 6 01:40:40.844945 systemd[1]: session-6.scope: Deactivated successfully. 
Mar 6 01:40:40.848080 systemd-logind[1525]: Removed session 6. Mar 6 01:40:45.847736 systemd[1]: Started sshd@6-10.0.0.102:22-10.0.0.1:56948.service - OpenSSH per-connection server daemon (10.0.0.1:56948). Mar 6 01:40:45.890158 sshd[3693]: Accepted publickey for core from 10.0.0.1 port 56948 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:40:45.892024 sshd[3693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:40:45.900333 systemd-logind[1525]: New session 7 of user core. Mar 6 01:40:45.907613 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 6 01:40:46.035960 sshd[3693]: pam_unix(sshd:session): session closed for user core Mar 6 01:40:46.040397 systemd[1]: sshd@6-10.0.0.102:22-10.0.0.1:56948.service: Deactivated successfully. Mar 6 01:40:46.047474 systemd-logind[1525]: Session 7 logged out. Waiting for processes to exit. Mar 6 01:40:46.047819 systemd[1]: session-7.scope: Deactivated successfully. Mar 6 01:40:46.050591 systemd-logind[1525]: Removed session 7. Mar 6 01:40:51.048823 systemd[1]: Started sshd@7-10.0.0.102:22-10.0.0.1:56952.service - OpenSSH per-connection server daemon (10.0.0.1:56952). Mar 6 01:40:51.093868 sshd[3731]: Accepted publickey for core from 10.0.0.1 port 56952 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:40:51.095727 sshd[3731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:40:51.101432 systemd-logind[1525]: New session 8 of user core. Mar 6 01:40:51.115600 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 6 01:40:51.237872 sshd[3731]: pam_unix(sshd:session): session closed for user core Mar 6 01:40:51.247685 systemd[1]: Started sshd@8-10.0.0.102:22-10.0.0.1:56962.service - OpenSSH per-connection server daemon (10.0.0.1:56962). Mar 6 01:40:51.248417 systemd[1]: sshd@7-10.0.0.102:22-10.0.0.1:56952.service: Deactivated successfully. 
Mar 6 01:40:51.252672 systemd[1]: session-8.scope: Deactivated successfully.
Mar 6 01:40:51.253111 systemd-logind[1525]: Session 8 logged out. Waiting for processes to exit.
Mar 6 01:40:51.256227 systemd-logind[1525]: Removed session 8.
Mar 6 01:40:51.280326 sshd[3744]: Accepted publickey for core from 10.0.0.1 port 56962 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:40:51.282402 sshd[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:40:51.290474 systemd-logind[1525]: New session 9 of user core.
Mar 6 01:40:51.297791 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 6 01:40:51.472194 sshd[3744]: pam_unix(sshd:session): session closed for user core
Mar 6 01:40:51.481991 systemd[1]: Started sshd@9-10.0.0.102:22-10.0.0.1:56976.service - OpenSSH per-connection server daemon (10.0.0.1:56976).
Mar 6 01:40:51.483981 systemd[1]: sshd@8-10.0.0.102:22-10.0.0.1:56962.service: Deactivated successfully.
Mar 6 01:40:51.495190 systemd[1]: session-9.scope: Deactivated successfully.
Mar 6 01:40:51.497981 systemd-logind[1525]: Session 9 logged out. Waiting for processes to exit.
Mar 6 01:40:51.502760 systemd-logind[1525]: Removed session 9.
Mar 6 01:40:51.534738 sshd[3757]: Accepted publickey for core from 10.0.0.1 port 56976 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:40:51.536667 sshd[3757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:40:51.543443 systemd-logind[1525]: New session 10 of user core.
Mar 6 01:40:51.552800 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 6 01:40:51.688372 sshd[3757]: pam_unix(sshd:session): session closed for user core
Mar 6 01:40:51.693951 systemd[1]: sshd@9-10.0.0.102:22-10.0.0.1:56976.service: Deactivated successfully.
Mar 6 01:40:51.697029 systemd[1]: session-10.scope: Deactivated successfully.
Mar 6 01:40:51.697055 systemd-logind[1525]: Session 10 logged out. Waiting for processes to exit.
Mar 6 01:40:51.698667 systemd-logind[1525]: Removed session 10.
Mar 6 01:40:56.707633 systemd[1]: Started sshd@10-10.0.0.102:22-10.0.0.1:56608.service - OpenSSH per-connection server daemon (10.0.0.1:56608).
Mar 6 01:40:56.745380 sshd[3798]: Accepted publickey for core from 10.0.0.1 port 56608 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:40:56.747921 sshd[3798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:40:56.754597 systemd-logind[1525]: New session 11 of user core.
Mar 6 01:40:56.767868 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 6 01:40:56.897539 sshd[3798]: pam_unix(sshd:session): session closed for user core
Mar 6 01:40:56.902099 systemd[1]: sshd@10-10.0.0.102:22-10.0.0.1:56608.service: Deactivated successfully.
Mar 6 01:40:56.905007 systemd-logind[1525]: Session 11 logged out. Waiting for processes to exit.
Mar 6 01:40:56.905022 systemd[1]: session-11.scope: Deactivated successfully.
Mar 6 01:40:56.906870 systemd-logind[1525]: Removed session 11.
Mar 6 01:41:01.910527 systemd[1]: Started sshd@11-10.0.0.102:22-10.0.0.1:56614.service - OpenSSH per-connection server daemon (10.0.0.1:56614).
Mar 6 01:41:01.942365 sshd[3835]: Accepted publickey for core from 10.0.0.1 port 56614 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:41:01.944473 sshd[3835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:41:01.955162 systemd-logind[1525]: New session 12 of user core.
Mar 6 01:41:01.961578 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 6 01:41:02.084019 sshd[3835]: pam_unix(sshd:session): session closed for user core
Mar 6 01:41:02.092538 systemd[1]: Started sshd@12-10.0.0.102:22-10.0.0.1:48370.service - OpenSSH per-connection server daemon (10.0.0.1:48370).
Mar 6 01:41:02.093116 systemd[1]: sshd@11-10.0.0.102:22-10.0.0.1:56614.service: Deactivated successfully.
Mar 6 01:41:02.096048 systemd-logind[1525]: Session 12 logged out. Waiting for processes to exit.
Mar 6 01:41:02.096795 systemd[1]: session-12.scope: Deactivated successfully.
Mar 6 01:41:02.098459 systemd-logind[1525]: Removed session 12.
Mar 6 01:41:02.127245 sshd[3847]: Accepted publickey for core from 10.0.0.1 port 48370 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:41:02.129174 sshd[3847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:41:02.134933 systemd-logind[1525]: New session 13 of user core.
Mar 6 01:41:02.144865 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 6 01:41:02.383244 sshd[3847]: pam_unix(sshd:session): session closed for user core
Mar 6 01:41:02.389489 systemd[1]: Started sshd@13-10.0.0.102:22-10.0.0.1:48374.service - OpenSSH per-connection server daemon (10.0.0.1:48374).
Mar 6 01:41:02.390061 systemd[1]: sshd@12-10.0.0.102:22-10.0.0.1:48370.service: Deactivated successfully.
Mar 6 01:41:02.394432 systemd-logind[1525]: Session 13 logged out. Waiting for processes to exit.
Mar 6 01:41:02.395168 systemd[1]: session-13.scope: Deactivated successfully.
Mar 6 01:41:02.396783 systemd-logind[1525]: Removed session 13.
Mar 6 01:41:02.425192 sshd[3861]: Accepted publickey for core from 10.0.0.1 port 48374 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:41:02.427194 sshd[3861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:41:02.434055 systemd-logind[1525]: New session 14 of user core.
Mar 6 01:41:02.445793 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 6 01:41:02.946526 sshd[3861]: pam_unix(sshd:session): session closed for user core
Mar 6 01:41:02.966129 systemd[1]: Started sshd@14-10.0.0.102:22-10.0.0.1:48386.service - OpenSSH per-connection server daemon (10.0.0.1:48386).
Mar 6 01:41:02.967587 systemd[1]: sshd@13-10.0.0.102:22-10.0.0.1:48374.service: Deactivated successfully.
Mar 6 01:41:02.973326 systemd[1]: session-14.scope: Deactivated successfully.
Mar 6 01:41:02.980044 systemd-logind[1525]: Session 14 logged out. Waiting for processes to exit.
Mar 6 01:41:02.982057 systemd-logind[1525]: Removed session 14.
Mar 6 01:41:03.001896 sshd[3902]: Accepted publickey for core from 10.0.0.1 port 48386 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:41:03.003952 sshd[3902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:41:03.009683 systemd-logind[1525]: New session 15 of user core.
Mar 6 01:41:03.019845 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 6 01:41:03.249047 sshd[3902]: pam_unix(sshd:session): session closed for user core
Mar 6 01:41:03.262692 systemd[1]: Started sshd@15-10.0.0.102:22-10.0.0.1:48388.service - OpenSSH per-connection server daemon (10.0.0.1:48388).
Mar 6 01:41:03.263757 systemd[1]: sshd@14-10.0.0.102:22-10.0.0.1:48386.service: Deactivated successfully.
Mar 6 01:41:03.268694 systemd-logind[1525]: Session 15 logged out. Waiting for processes to exit.
Mar 6 01:41:03.270378 systemd[1]: session-15.scope: Deactivated successfully.
Mar 6 01:41:03.272558 systemd-logind[1525]: Removed session 15.
Mar 6 01:41:03.305086 sshd[3915]: Accepted publickey for core from 10.0.0.1 port 48388 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:41:03.307219 sshd[3915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:41:03.313852 systemd-logind[1525]: New session 16 of user core.
Mar 6 01:41:03.324633 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 6 01:41:03.476674 sshd[3915]: pam_unix(sshd:session): session closed for user core
Mar 6 01:41:03.482301 systemd[1]: sshd@15-10.0.0.102:22-10.0.0.1:48388.service: Deactivated successfully.
Mar 6 01:41:03.485598 systemd-logind[1525]: Session 16 logged out. Waiting for processes to exit.
Mar 6 01:41:03.485722 systemd[1]: session-16.scope: Deactivated successfully.
Mar 6 01:41:03.487641 systemd-logind[1525]: Removed session 16.
Mar 6 01:41:08.489743 systemd[1]: Started sshd@16-10.0.0.102:22-10.0.0.1:48396.service - OpenSSH per-connection server daemon (10.0.0.1:48396).
Mar 6 01:41:08.530223 sshd[3954]: Accepted publickey for core from 10.0.0.1 port 48396 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:41:08.532954 sshd[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:41:08.540149 systemd-logind[1525]: New session 17 of user core.
Mar 6 01:41:08.571245 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 6 01:41:08.703196 kubelet[2624]: E0306 01:41:08.701573 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:41:08.727622 sshd[3954]: pam_unix(sshd:session): session closed for user core
Mar 6 01:41:08.733963 systemd[1]: sshd@16-10.0.0.102:22-10.0.0.1:48396.service: Deactivated successfully.
Mar 6 01:41:08.737642 systemd[1]: session-17.scope: Deactivated successfully.
Mar 6 01:41:08.737900 systemd-logind[1525]: Session 17 logged out. Waiting for processes to exit.
Mar 6 01:41:08.740353 systemd-logind[1525]: Removed session 17.
Mar 6 01:41:13.701904 kubelet[2624]: E0306 01:41:13.701655 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:41:13.737520 systemd[1]: Started sshd@17-10.0.0.102:22-10.0.0.1:54626.service - OpenSSH per-connection server daemon (10.0.0.1:54626).
Mar 6 01:41:13.769659 sshd[3991]: Accepted publickey for core from 10.0.0.1 port 54626 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:41:13.771458 sshd[3991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:41:13.777005 systemd-logind[1525]: New session 18 of user core.
Mar 6 01:41:13.789760 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 6 01:41:13.898427 sshd[3991]: pam_unix(sshd:session): session closed for user core
Mar 6 01:41:13.902641 systemd[1]: sshd@17-10.0.0.102:22-10.0.0.1:54626.service: Deactivated successfully.
Mar 6 01:41:13.907815 systemd-logind[1525]: Session 18 logged out. Waiting for processes to exit.
Mar 6 01:41:13.907884 systemd[1]: session-18.scope: Deactivated successfully.
Mar 6 01:41:13.909756 systemd-logind[1525]: Removed session 18.
Mar 6 01:41:18.701322 kubelet[2624]: E0306 01:41:18.701002 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:41:18.916645 systemd[1]: Started sshd@18-10.0.0.102:22-10.0.0.1:54630.service - OpenSSH per-connection server daemon (10.0.0.1:54630).
Mar 6 01:41:18.949964 sshd[4026]: Accepted publickey for core from 10.0.0.1 port 54630 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:41:18.951919 sshd[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:41:18.958071 systemd-logind[1525]: New session 19 of user core.
Mar 6 01:41:18.962563 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 6 01:41:19.098073 sshd[4026]: pam_unix(sshd:session): session closed for user core
Mar 6 01:41:19.103712 systemd[1]: sshd@18-10.0.0.102:22-10.0.0.1:54630.service: Deactivated successfully.
Mar 6 01:41:19.107333 systemd-logind[1525]: Session 19 logged out. Waiting for processes to exit.
Mar 6 01:41:19.107426 systemd[1]: session-19.scope: Deactivated successfully.
Mar 6 01:41:19.109147 systemd-logind[1525]: Removed session 19.
Mar 6 01:41:21.701536 kubelet[2624]: E0306 01:41:21.701456 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:41:24.112509 systemd[1]: Started sshd@19-10.0.0.102:22-10.0.0.1:36176.service - OpenSSH per-connection server daemon (10.0.0.1:36176).
Mar 6 01:41:24.150516 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 36176 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:41:24.153049 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:41:24.159653 systemd-logind[1525]: New session 20 of user core.
Mar 6 01:41:24.167029 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 6 01:41:24.291401 sshd[4061]: pam_unix(sshd:session): session closed for user core
Mar 6 01:41:24.296485 systemd[1]: sshd@19-10.0.0.102:22-10.0.0.1:36176.service: Deactivated successfully.
Mar 6 01:41:24.299664 systemd-logind[1525]: Session 20 logged out. Waiting for processes to exit.
Mar 6 01:41:24.299851 systemd[1]: session-20.scope: Deactivated successfully.
Mar 6 01:41:24.301832 systemd-logind[1525]: Removed session 20.