Mar 2 13:05:29.301221 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 2 11:01:37 -00 2026
Mar 2 13:05:29.301249 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 13:05:29.301265 kernel: BIOS-provided physical RAM map:
Mar 2 13:05:29.301275 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 2 13:05:29.301968 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 2 13:05:29.301984 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 2 13:05:29.301995 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 2 13:05:29.302003 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 2 13:05:29.302010 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 2 13:05:29.302025 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 2 13:05:29.302032 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 2 13:05:29.302042 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 2 13:05:29.302739 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 2 13:05:29.302754 kernel: NX (Execute Disable) protection: active
Mar 2 13:05:29.302767 kernel: APIC: Static calls initialized
Mar 2 13:05:29.302997 kernel: SMBIOS 2.8 present.
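The BIOS-e820 lines above are the firmware's physical memory map. As a sanity check, the total usable RAM implied by the two `usable` ranges can be summed with plain bash hex arithmetic; the range endpoints below are copied verbatim from the log, and the result roughly matches the kernel's later "2571752K available" figure (the kernel carves a few pages out of the raw ranges).

```shell
#!/bin/bash
# Sum the inclusive "usable" e820 ranges reported above.
# Endpoints copied from the log; bash evaluates 0x... literals directly.
total=0
while read -r start end; do
  total=$(( total + end - start + 1 ))
done <<'EOF'
0x0000000000000000 0x000000000009fbff
0x0000000000100000 0x000000009cfdbfff
EOF
echo $(( total / 1024 / 1024 ))   # prints 2511 (MiB of firmware-usable RAM)
```

On a live machine the same ranges are visible via `dmesg | grep e820` or `/proc/iomem` (root required for exact addresses on recent kernels).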
Mar 2 13:05:29.303010 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 2 13:05:29.303019 kernel: Hypervisor detected: KVM
Mar 2 13:05:29.303027 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 2 13:05:29.303035 kernel: kvm-clock: using sched offset of 10932648457 cycles
Mar 2 13:05:29.303044 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 2 13:05:29.303053 kernel: tsc: Detected 2445.426 MHz processor
Mar 2 13:05:29.303062 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 2 13:05:29.303071 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 2 13:05:29.303085 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 2 13:05:29.303094 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 2 13:05:29.303103 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 2 13:05:29.303112 kernel: Using GB pages for direct mapping
Mar 2 13:05:29.303123 kernel: ACPI: Early table checksum verification disabled
Mar 2 13:05:29.303134 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 2 13:05:29.303143 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:05:29.303152 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:05:29.303161 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:05:29.303175 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 2 13:05:29.303184 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:05:29.303193 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:05:29.303203 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:05:29.303212 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:05:29.303221 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 2 13:05:29.303231 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 2 13:05:29.303247 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 2 13:05:29.303260 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 2 13:05:29.303269 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 2 13:05:29.303279 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 2 13:05:29.303965 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 2 13:05:29.303978 kernel: No NUMA configuration found
Mar 2 13:05:29.303987 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 2 13:05:29.304004 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 2 13:05:29.304017 kernel: Zone ranges:
Mar 2 13:05:29.304026 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 2 13:05:29.304035 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 2 13:05:29.304045 kernel: Normal empty
Mar 2 13:05:29.304054 kernel: Movable zone start for each node
Mar 2 13:05:29.304063 kernel: Early memory node ranges
Mar 2 13:05:29.304072 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 2 13:05:29.304081 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 2 13:05:29.304090 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 2 13:05:29.304104 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 13:05:29.304631 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 2 13:05:29.304646 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 2 13:05:29.304656 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 2 13:05:29.304666 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 2 13:05:29.304675 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 2 13:05:29.304684 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 2 13:05:29.304694 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 2 13:05:29.304703 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 2 13:05:29.304722 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 2 13:05:29.304732 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 2 13:05:29.304741 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 2 13:05:29.304750 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 2 13:05:29.304759 kernel: TSC deadline timer available
Mar 2 13:05:29.304768 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 2 13:05:29.304778 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 2 13:05:29.304787 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 2 13:05:29.305098 kernel: kvm-guest: setup PV sched yield
Mar 2 13:05:29.305115 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 2 13:05:29.305124 kernel: Booting paravirtualized kernel on KVM
Mar 2 13:05:29.305134 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 2 13:05:29.306264 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 2 13:05:29.306280 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 2 13:05:29.306698 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 2 13:05:29.306708 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 2 13:05:29.306722 kernel: kvm-guest: PV spinlocks enabled
Mar 2 13:05:29.306732 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 2 13:05:29.306752 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 13:05:29.306761 kernel: random: crng init done
Mar 2 13:05:29.306770 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 2 13:05:29.306780 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 2 13:05:29.306789 kernel: Fallback order for Node 0: 0
Mar 2 13:05:29.306799 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 2 13:05:29.306810 kernel: Policy zone: DMA32
Mar 2 13:05:29.306821 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 2 13:05:29.306835 kernel: Memory: 2434604K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136888K reserved, 0K cma-reserved)
Mar 2 13:05:29.306845 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 2 13:05:29.306854 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 2 13:05:29.306863 kernel: ftrace: allocated 149 pages with 4 groups
Mar 2 13:05:29.306872 kernel: Dynamic Preempt: voluntary
Mar 2 13:05:29.306882 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 2 13:05:29.306893 kernel: rcu: RCU event tracing is enabled.
Mar 2 13:05:29.306903 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 2 13:05:29.306913 kernel: Trampoline variant of Tasks RCU enabled.
Mar 2 13:05:29.306927 kernel: Rude variant of Tasks RCU enabled.
Mar 2 13:05:29.306936 kernel: Tracing variant of Tasks RCU enabled.
Mar 2 13:05:29.306946 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 2 13:05:29.306957 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 2 13:05:29.318209 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 2 13:05:29.318223 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
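The kernel command line recorded above carries the dm-verity root hash as a `verity.usrhash=` parameter. A minimal sketch of pulling one such parameter out of a command-line string with pure shell word splitting (the string below is an abbreviated copy of the one in the log; on a live system the full line is in `/proc/cmdline`):

```shell
#!/bin/bash
# Extract the verity.usrhash= value from a kernel command line.
# The sample string is abbreviated from the log above.
cmdline='BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr root=LABEL=ROOT console=ttyS0,115200 verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b'
usrhash=''
for param in $cmdline; do             # unquoted on purpose: split on spaces
  case $param in
    verity.usrhash=*) usrhash=${param#verity.usrhash=} ;;
  esac
done
echo "$usrhash"
```

This relies only on POSIX parameter expansion and `case` matching, so it works in dash as well as bash; quoted or space-containing parameter values would need a real parser.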
Mar 2 13:05:29.318233 kernel: Console: colour VGA+ 80x25
Mar 2 13:05:29.318242 kernel: printk: console [ttyS0] enabled
Mar 2 13:05:29.318251 kernel: ACPI: Core revision 20230628
Mar 2 13:05:29.318266 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 2 13:05:29.318276 kernel: APIC: Switch to symmetric I/O mode setup
Mar 2 13:05:29.320915 kernel: x2apic enabled
Mar 2 13:05:29.320931 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 2 13:05:29.320943 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 2 13:05:29.320955 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 2 13:05:29.320968 kernel: kvm-guest: setup PV IPIs
Mar 2 13:05:29.320983 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 2 13:05:29.321018 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 2 13:05:29.321032 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 2 13:05:29.321042 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 2 13:05:29.321056 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 2 13:05:29.321073 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 2 13:05:29.321087 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 2 13:05:29.321101 kernel: Spectre V2 : Mitigation: Retpolines
Mar 2 13:05:29.321113 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 2 13:05:29.321125 kernel: Speculative Store Bypass: Vulnerable
Mar 2 13:05:29.321147 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 2 13:05:29.321207 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 2 13:05:29.321225 kernel: active return thunk: srso_alias_return_thunk
Mar 2 13:05:29.321239 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 2 13:05:29.321251 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 2 13:05:29.321266 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 2 13:05:29.321277 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 2 13:05:29.321378 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 2 13:05:29.321401 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 2 13:05:29.321413 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 2 13:05:29.321425 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 2 13:05:29.321436 kernel: Freeing SMP alternatives memory: 32K
Mar 2 13:05:29.321500 kernel: pid_max: default: 32768 minimum: 301
Mar 2 13:05:29.321512 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 2 13:05:29.321523 kernel: landlock: Up and running.
Mar 2 13:05:29.321535 kernel: SELinux: Initializing.
Mar 2 13:05:29.321546 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:05:29.321564 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:05:29.321576 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 2 13:05:29.321587 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:05:29.321599 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:05:29.321611 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:05:29.321623 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 2 13:05:29.321634 kernel: signal: max sigframe size: 1776
Mar 2 13:05:29.321674 kernel: rcu: Hierarchical SRCU implementation.
Mar 2 13:05:29.321687 kernel: rcu: Max phase no-delay instances is 400.
Mar 2 13:05:29.321703 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 2 13:05:29.321715 kernel: smp: Bringing up secondary CPUs ...
Mar 2 13:05:29.321726 kernel: smpboot: x86: Booting SMP configuration:
Mar 2 13:05:29.321737 kernel: .... node #0, CPUs: #1 #2 #3
Mar 2 13:05:29.321749 kernel: smp: Brought up 1 node, 4 CPUs
Mar 2 13:05:29.321761 kernel: smpboot: Max logical packages: 1
Mar 2 13:05:29.321775 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 2 13:05:29.321788 kernel: devtmpfs: initialized
Mar 2 13:05:29.321798 kernel: x86/mm: Memory block size: 128MB
Mar 2 13:05:29.321817 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 2 13:05:29.321828 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 2 13:05:29.321842 kernel: pinctrl core: initialized pinctrl subsystem
Mar 2 13:05:29.321855 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 2 13:05:29.321867 kernel: audit: initializing netlink subsys (disabled)
Mar 2 13:05:29.321881 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 2 13:05:29.321895 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 2 13:05:29.321909 kernel: audit: type=2000 audit(1772456722.886:1): state=initialized audit_enabled=0 res=1
Mar 2 13:05:29.321919 kernel: cpuidle: using governor menu
Mar 2 13:05:29.321937 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 2 13:05:29.321949 kernel: dca service started, version 1.12.1
Mar 2 13:05:29.321963 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 2 13:05:29.321974 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 2 13:05:29.321988 kernel: PCI: Using configuration type 1 for base access
Mar 2 13:05:29.322001 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 2 13:05:29.322013 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 2 13:05:29.322027 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 2 13:05:29.322038 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 2 13:05:29.322057 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 2 13:05:29.322068 kernel: ACPI: Added _OSI(Module Device)
Mar 2 13:05:29.322080 kernel: ACPI: Added _OSI(Processor Device)
Mar 2 13:05:29.322093 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 2 13:05:29.322107 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 2 13:05:29.322118 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 2 13:05:29.322129 kernel: ACPI: Interpreter enabled
Mar 2 13:05:29.322143 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 2 13:05:29.322157 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 2 13:05:29.322175 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 2 13:05:29.322190 kernel: PCI: Using E820 reservations for host bridge windows
Mar 2 13:05:29.322201 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 2 13:05:29.322213 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 2 13:05:29.322938 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 2 13:05:29.323121 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 2 13:05:29.323278 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 2 13:05:29.323355 kernel: PCI host bridge to bus 0000:00
Mar 2 13:05:29.323649 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 2 13:05:29.323795 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 2 13:05:29.323933 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 2 13:05:29.324068 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 2 13:05:29.324203 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 2 13:05:29.324439 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 2 13:05:29.324627 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 2 13:05:29.324887 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 2 13:05:29.325101 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 2 13:05:29.325253 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 2 13:05:29.325536 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 2 13:05:29.325690 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 2 13:05:29.325838 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 2 13:05:29.326106 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 2 13:05:29.326502 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 2 13:05:29.326684 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 2 13:05:29.326920 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 2 13:05:29.327267 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 2 13:05:29.327622 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 2 13:05:29.327861 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 2 13:05:29.328088 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 2 13:05:29.328562 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 2 13:05:29.328801 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 2 13:05:29.329016 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 2 13:05:29.329237 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 2 13:05:29.329575 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 2 13:05:29.329793 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 2 13:05:29.330014 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 2 13:05:29.330243 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 12695 usecs
Mar 2 13:05:29.330657 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 2 13:05:29.330874 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 2 13:05:29.331094 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 2 13:05:29.331577 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 2 13:05:29.331810 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 2 13:05:29.331829 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 2 13:05:29.331842 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 2 13:05:29.331854 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 2 13:05:29.331865 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 2 13:05:29.331877 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 2 13:05:29.331889 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 2 13:05:29.331901 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 2 13:05:29.331919 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 2 13:05:29.331931 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 2 13:05:29.331944 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 2 13:05:29.331957 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 2 13:05:29.331969 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 2 13:05:29.331979 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 2 13:05:29.331993 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 2 13:05:29.332005 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 2 13:05:29.332016 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 2 13:05:29.332034 kernel: iommu: Default domain type: Translated
Mar 2 13:05:29.332046 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 2 13:05:29.332060 kernel: PCI: Using ACPI for IRQ routing
Mar 2 13:05:29.332070 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 2 13:05:29.332083 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 2 13:05:29.332095 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 2 13:05:29.332388 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 2 13:05:29.332662 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 2 13:05:29.333007 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 2 13:05:29.333035 kernel: vgaarb: loaded
Mar 2 13:05:29.333049 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 2 13:05:29.333061 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 2 13:05:29.333074 kernel: clocksource: Switched to clocksource kvm-clock
Mar 2 13:05:29.333085 kernel: VFS: Disk quotas dquot_6.6.0
Mar 2 13:05:29.333098 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 2 13:05:29.333109 kernel: pnp: PnP ACPI init
Mar 2 13:05:29.333603 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 2 13:05:29.333634 kernel: pnp: PnP ACPI: found 6 devices
Mar 2 13:05:29.333649 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 2 13:05:29.333660 kernel: NET: Registered PF_INET protocol family
Mar 2 13:05:29.333672 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 2 13:05:29.333686 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 2 13:05:29.333698 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 2 13:05:29.333710 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 2 13:05:29.333723 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 2 13:05:29.333735 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 2 13:05:29.333754 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:05:29.333765 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:05:29.333776 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 2 13:05:29.333788 kernel: NET: Registered PF_XDP protocol family
Mar 2 13:05:29.334003 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 2 13:05:29.334202 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 2 13:05:29.334548 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 2 13:05:29.334758 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 2 13:05:29.334971 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 2 13:05:29.335169 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 2 13:05:29.335186 kernel: PCI: CLS 0 bytes, default 64
Mar 2 13:05:29.335198 kernel: Initialise system trusted keyrings
Mar 2 13:05:29.335211 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 2 13:05:29.335222 kernel: Key type asymmetric registered
Mar 2 13:05:29.335233 kernel: Asymmetric key parser 'x509' registered
Mar 2 13:05:29.335244 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 2 13:05:29.335256 kernel: io scheduler mq-deadline registered
Mar 2 13:05:29.335273 kernel: io scheduler kyber registered
Mar 2 13:05:29.335357 kernel: io scheduler bfq registered
Mar 2 13:05:29.335372 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 2 13:05:29.335385 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 2 13:05:29.335399 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 2 13:05:29.335409 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 2 13:05:29.335422 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 2 13:05:29.335433 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 2 13:05:29.335485 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 2 13:05:29.335503 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 2 13:05:29.335515 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 2 13:05:29.335846 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 2 13:05:29.336034 kernel: rtc_cmos 00:04: registered as rtc0
Mar 2 13:05:29.336211 kernel: rtc_cmos 00:04: setting system clock to 2026-03-02T13:05:27 UTC (1772456727)
Mar 2 13:05:29.336523 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 2 13:05:29.336541 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 2 13:05:29.336552 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 2 13:05:29.336572 kernel: NET: Registered PF_INET6 protocol family
Mar 2 13:05:29.336583 kernel: Segment Routing with IPv6
Mar 2 13:05:29.336595 kernel: In-situ OAM (IOAM) with IPv6
Mar 2 13:05:29.336606 kernel: NET: Registered PF_PACKET protocol family
Mar 2 13:05:29.336616 kernel: Key type dns_resolver registered
Mar 2 13:05:29.336628 kernel: IPI shorthand broadcast: enabled
Mar 2 13:05:29.336641 kernel: sched_clock: Marking stable (4510092775, 704849707)->(6098494197, -883551715)
Mar 2 13:05:29.336652 kernel: registered taskstats version 1
Mar 2 13:05:29.336663 kernel: Loading compiled-in X.509 certificates
Mar 2 13:05:29.336679 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: adc4961784537911a77ff0c4d6bd9b9639a51d45'
Mar 2 13:05:29.336691 kernel: Key type .fscrypt registered
Mar 2 13:05:29.336701 kernel: Key type fscrypt-provisioning registered
Mar 2 13:05:29.336712 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 2 13:05:29.336723 kernel: ima: Allocated hash algorithm: sha1
Mar 2 13:05:29.336734 kernel: ima: No architecture policies found
Mar 2 13:05:29.336746 kernel: clk: Disabling unused clocks
Mar 2 13:05:29.336758 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 2 13:05:29.336770 kernel: Write protecting the kernel read-only data: 36864k
Mar 2 13:05:29.336788 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 2 13:05:29.336800 kernel: Run /init as init process
Mar 2 13:05:29.336811 kernel: with arguments:
Mar 2 13:05:29.336823 kernel: /init
Mar 2 13:05:29.336835 kernel: with environment:
Mar 2 13:05:29.336846 kernel: HOME=/
Mar 2 13:05:29.336857 kernel: TERM=linux
Mar 2 13:05:29.336872 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 13:05:29.336899 systemd[1]: Detected virtualization kvm.
Mar 2 13:05:29.336912 systemd[1]: Detected architecture x86-64.
Mar 2 13:05:29.336924 systemd[1]: Running in initrd.
Mar 2 13:05:29.336936 systemd[1]: No hostname configured, using default hostname.
Mar 2 13:05:29.336948 systemd[1]: Hostname set to .
Mar 2 13:05:29.336961 systemd[1]: Initializing machine ID from VM UUID.
Mar 2 13:05:29.336973 systemd[1]: Queued start job for default target initrd.target.
Mar 2 13:05:29.336986 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:05:29.337004 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
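The networking hash-table lines above all report a size as "(order: N, B bytes)", where the order is the power-of-two page count backing the table. Taking the "TCP established hash table entries: 32768 (order: 6, 262144 bytes)" line as a worked example, the bucket size of 8 bytes is inferred from the log's own numbers (262144 / 32768), and the order arithmetic can be checked directly:

```shell
#!/bin/bash
# Re-derive the kernel's "(order: N, B bytes)" figures for the TCP
# established hash table reported in the log: 32768 entries.
entries=32768
bytes=$(( entries * 8 ))        # 8 bytes/bucket, inferred from 262144/32768
pages=$(( bytes / 4096 ))       # 64 four-KiB pages
order=0
while [ $(( 1 << order )) -lt "$pages" ]; do
  order=$(( order + 1 ))
done
echo "order $order, $bytes bytes" # order 6, 262144 bytes, matching the log
```

The same arithmetic reproduces the other table lines (e.g. the bind table's order 8 comes from its larger 32-byte buckets: 32768 × 32 = 1048576 bytes = 256 pages).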
Mar 2 13:05:29.337018 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 2 13:05:29.337030 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 13:05:29.337043 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 2 13:05:29.337056 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 2 13:05:29.337071 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 2 13:05:29.337168 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 2 13:05:29.337185 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:05:29.337198 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:05:29.337210 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:05:29.337224 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 13:05:29.337259 systemd[1]: Reached target swap.target - Swaps.
Mar 2 13:05:29.337276 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:05:29.337375 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 13:05:29.337391 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 13:05:29.337404 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 2 13:05:29.337418 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 2 13:05:29.337433 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:05:29.337494 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:05:29.337509 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:05:29.337521 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:05:29.337541 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 2 13:05:29.337553 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 13:05:29.337568 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 2 13:05:29.337579 systemd[1]: Starting systemd-fsck-usr.service...
Mar 2 13:05:29.337594 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 13:05:29.337607 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 13:05:29.337619 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:05:29.337638 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 2 13:05:29.337687 systemd-journald[194]: Collecting audit messages is disabled.
Mar 2 13:05:29.337721 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:05:29.337735 systemd-journald[194]: Journal started
Mar 2 13:05:29.337764 systemd-journald[194]: Runtime Journal (/run/log/journal/08ff4b6d74fd42859dae87e173279043) is 6.0M, max 48.4M, 42.3M free.
Mar 2 13:05:29.337823 systemd[1]: Finished systemd-fsck-usr.service.
Mar 2 13:05:29.359397 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 13:05:29.380786 systemd-modules-load[195]: Inserted module 'overlay'
Mar 2 13:05:29.685926 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 2 13:05:29.685970 kernel: Bridge firewalling registered
Mar 2 13:05:29.390014 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 13:05:29.487105 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 2 13:05:29.691778 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 13:05:29.710637 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:05:29.767025 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:05:29.776249 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:05:29.857578 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:05:29.892248 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:05:29.898155 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:05:29.946210 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 13:05:29.983088 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:05:30.037167 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 13:05:30.037902 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:05:30.077146 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:05:30.132844 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 2 13:05:30.174008 systemd-resolved[223]: Positive Trust Anchors:
Mar 2 13:05:30.175175 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 13:05:30.176497 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 13:05:30.185213 systemd-resolved[223]: Defaulting to hostname 'linux'.
Mar 2 13:05:30.282252 dracut-cmdline[232]: dracut-dracut-053
Mar 2 13:05:30.282252 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 13:05:30.189195 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 13:05:30.269423 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:05:30.574783 kernel: SCSI subsystem initialized
Mar 2 13:05:30.591170 kernel: Loading iSCSI transport class v2.0-870.
Mar 2 13:05:30.662912 kernel: iscsi: registered transport (tcp)
Mar 2 13:05:30.730192 kernel: iscsi: registered transport (qla4xxx)
Mar 2 13:05:30.730745 kernel: QLogic iSCSI HBA Driver
Mar 2 13:05:30.877433 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 2 13:05:30.916011 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 2 13:05:30.997004 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 2 13:05:30.997098 kernel: device-mapper: uevent: version 1.0.3
Mar 2 13:05:31.023047 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 2 13:05:31.144613 kernel: raid6: avx2x4 gen() 12458 MB/s
Mar 2 13:05:31.164418 kernel: raid6: avx2x2 gen() 15298 MB/s
Mar 2 13:05:31.186922 kernel: raid6: avx2x1 gen() 12123 MB/s
Mar 2 13:05:31.187029 kernel: raid6: using algorithm avx2x2 gen() 15298 MB/s
Mar 2 13:05:31.227670 kernel: raid6: .... xor() 16122 MB/s, rmw enabled
Mar 2 13:05:31.227765 kernel: raid6: using avx2x2 recovery algorithm
Mar 2 13:05:31.276273 kernel: xor: automatically using best checksumming function avx
Mar 2 13:05:31.700692 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 2 13:05:31.752775 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 13:05:31.778914 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:05:31.851824 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Mar 2 13:05:31.867335 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:05:31.897885 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 2 13:05:31.952797 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Mar 2 13:05:32.057612 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 13:05:32.073777 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 13:05:32.289901 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:05:32.328879 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 2 13:05:32.376861 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 2 13:05:32.384931 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 13:05:32.400377 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:05:32.418745 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 13:05:32.444014 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 2 13:05:32.470430 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 2 13:05:32.472237 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 13:05:32.484148 kernel: cryptd: max_cpu_qlen set to 1000
Mar 2 13:05:32.505693 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 2 13:05:32.520089 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 13:05:32.520273 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:05:32.530915 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:05:32.546068 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:05:32.546681 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:05:32.563209 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:05:32.618253 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:05:32.664992 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 2 13:05:32.665051 kernel: GPT:9289727 != 19775487
Mar 2 13:05:32.669257 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 2 13:05:32.669368 kernel: GPT:9289727 != 19775487
Mar 2 13:05:32.674593 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 2 13:05:32.678411 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:05:32.761370 kernel: libata version 3.00 loaded.
Mar 2 13:05:32.823178 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 2 13:05:32.828548 kernel: AES CTR mode by8 optimization enabled
Mar 2 13:05:33.683667 kernel: ahci 0000:00:1f.2: version 3.0
Mar 2 13:05:33.687919 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 2 13:05:33.687945 kernel: BTRFS: device fsid a0930b2b-aeed-42a5-bf2f-ec141dfc71d3 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (465)
Mar 2 13:05:33.691361 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 2 13:05:33.691737 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 2 13:05:33.697364 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (482)
Mar 2 13:05:33.729236 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 2 13:05:33.992821 kernel: scsi host0: ahci
Mar 2 13:05:33.993143 kernel: scsi host1: ahci
Mar 2 13:05:33.993460 kernel: scsi host2: ahci
Mar 2 13:05:33.993706 kernel: scsi host3: ahci
Mar 2 13:05:33.993975 kernel: scsi host4: ahci
Mar 2 13:05:33.994238 kernel: scsi host5: ahci
Mar 2 13:05:33.994645 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 2 13:05:33.994669 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 2 13:05:33.994689 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 2 13:05:33.994708 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 2 13:05:33.994725 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 2 13:05:33.994757 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 2 13:05:33.991354 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:05:34.011681 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 2 13:05:34.046012 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 2 13:05:34.081262 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 2 13:05:34.081381 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 2 13:05:34.068484 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 2 13:05:34.091724 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 2 13:05:34.087339 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 2 13:05:34.150096 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 2 13:05:34.150146 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 2 13:05:34.150164 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 2 13:05:34.150181 kernel: ata3.00: applying bridge limits
Mar 2 13:05:34.150199 kernel: ata3.00: configured for UDMA/100
Mar 2 13:05:34.150216 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 2 13:05:34.157372 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 2 13:05:34.163915 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 2 13:05:34.184431 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:05:34.198465 disk-uuid[568]: Primary Header is updated.
Mar 2 13:05:34.198465 disk-uuid[568]: Secondary Entries is updated.
Mar 2 13:05:34.198465 disk-uuid[568]: Secondary Header is updated.
Mar 2 13:05:34.233671 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:05:34.260397 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:05:34.262122 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:05:34.297272 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:05:34.356744 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 2 13:05:34.359233 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 2 13:05:34.379130 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 2 13:05:35.282676 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:05:35.290474 disk-uuid[570]: The operation has completed successfully.
Mar 2 13:05:35.328862 kernel: hrtimer: interrupt took 12442054 ns
Mar 2 13:05:35.535465 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 2 13:05:35.535758 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 2 13:05:35.556013 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 2 13:05:35.568675 sh[595]: Success
Mar 2 13:05:35.620672 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 2 13:05:35.715789 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 2 13:05:35.742660 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 2 13:05:35.757018 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 2 13:05:35.800623 kernel: BTRFS info (device dm-0): first mount of filesystem a0930b2b-aeed-42a5-bf2f-ec141dfc71d3
Mar 2 13:05:35.800842 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:05:35.800870 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 2 13:05:35.818908 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 2 13:05:35.823236 kernel: BTRFS info (device dm-0): using free space tree
Mar 2 13:05:35.868991 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 2 13:05:35.879662 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 2 13:05:35.913733 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 2 13:05:35.929234 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 2 13:05:35.961233 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:05:35.961449 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:05:35.961467 kernel: BTRFS info (device vda6): using free space tree
Mar 2 13:05:35.994626 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 13:05:36.041656 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 2 13:05:36.052166 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:05:36.069547 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 2 13:05:36.090081 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 2 13:05:36.735176 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 13:05:36.741484 ignition[697]: Ignition 2.19.0
Mar 2 13:05:36.741499 ignition[697]: Stage: fetch-offline
Mar 2 13:05:36.741761 ignition[697]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:05:36.941079 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 13:05:36.741783 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:05:36.742155 ignition[697]: parsed url from cmdline: ""
Mar 2 13:05:36.742163 ignition[697]: no config URL provided
Mar 2 13:05:36.742175 ignition[697]: reading system config file "/usr/lib/ignition/user.ign"
Mar 2 13:05:36.742195 ignition[697]: no config at "/usr/lib/ignition/user.ign"
Mar 2 13:05:36.742343 ignition[697]: op(1): [started] loading QEMU firmware config module
Mar 2 13:05:36.742350 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 2 13:05:37.072256 ignition[697]: op(1): [finished] loading QEMU firmware config module
Mar 2 13:05:37.231563 systemd-networkd[783]: lo: Link UP
Mar 2 13:05:37.231648 systemd-networkd[783]: lo: Gained carrier
Mar 2 13:05:37.235138 systemd-networkd[783]: Enumeration completed
Mar 2 13:05:37.236465 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 13:05:37.236932 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:05:37.236939 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 13:05:37.242597 systemd-networkd[783]: eth0: Link UP
Mar 2 13:05:37.242646 systemd-networkd[783]: eth0: Gained carrier
Mar 2 13:05:37.242702 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:05:37.282567 ignition[697]: parsing config with SHA512: 47e0b94c1c47208b57bf1709a441e4b0d72f800c66a8e26f92a32506e4e69ac05e985029863417cd8c6ede588cb6dcebd59b34ea34f629516d11f48916c9f9ab
Mar 2 13:05:37.243813 systemd[1]: Reached target network.target - Network.
Mar 2 13:05:37.296073 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 2 13:05:37.346995 unknown[697]: fetched base config from "system"
Mar 2 13:05:37.347018 unknown[697]: fetched user config from "qemu"
Mar 2 13:05:37.730205 ignition[697]: fetch-offline: fetch-offline passed
Mar 2 13:05:37.733095 ignition[697]: Ignition finished successfully
Mar 2 13:05:37.740014 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 13:05:37.742682 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 2 13:05:37.756133 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 2 13:05:37.853866 ignition[787]: Ignition 2.19.0
Mar 2 13:05:37.854814 ignition[787]: Stage: kargs
Mar 2 13:05:37.861430 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:05:37.861508 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:05:37.875172 ignition[787]: kargs: kargs passed
Mar 2 13:05:37.875416 ignition[787]: Ignition finished successfully
Mar 2 13:05:37.884007 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 2 13:05:37.937785 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 2 13:05:38.113548 ignition[795]: Ignition 2.19.0
Mar 2 13:05:38.113592 ignition[795]: Stage: disks
Mar 2 13:05:38.113980 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:05:38.113995 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:05:38.115789 ignition[795]: disks: disks passed
Mar 2 13:05:38.115870 ignition[795]: Ignition finished successfully
Mar 2 13:05:38.135964 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 2 13:05:38.137490 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 2 13:05:38.151941 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 2 13:05:38.152112 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 13:05:38.173955 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 13:05:38.189514 systemd[1]: Reached target basic.target - Basic System.
Mar 2 13:05:38.432451 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 2 13:05:38.479014 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 2 13:05:38.489935 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 2 13:05:38.540201 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 2 13:05:38.921057 systemd-networkd[783]: eth0: Gained IPv6LL
Mar 2 13:05:39.098452 kernel: EXT4-fs (vda9): mounted filesystem 84e86976-7918-44d3-a6f5-d0f90ce6c152 r/w with ordered data mode. Quota mode: none.
Mar 2 13:05:39.100263 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 2 13:05:39.118761 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 2 13:05:39.142583 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 13:05:39.151694 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 2 13:05:39.157970 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 2 13:05:39.158058 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 2 13:05:39.249776 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Mar 2 13:05:39.249829 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:05:39.249849 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:05:39.249868 kernel: BTRFS info (device vda6): using free space tree
Mar 2 13:05:39.158102 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 13:05:39.276841 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 13:05:39.279036 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 13:05:39.330728 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 2 13:05:39.347816 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 2 13:05:39.491254 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Mar 2 13:05:39.514388 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Mar 2 13:05:39.531352 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Mar 2 13:05:39.540803 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 2 13:05:39.774116 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 2 13:05:39.787506 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 2 13:05:39.794336 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 2 13:05:39.814931 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 2 13:05:39.823501 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:05:39.849865 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 2 13:05:39.886054 ignition[929]: INFO : Ignition 2.19.0
Mar 2 13:05:39.886054 ignition[929]: INFO : Stage: mount
Mar 2 13:05:39.894155 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:05:39.894155 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:05:39.894155 ignition[929]: INFO : mount: mount passed
Mar 2 13:05:39.894155 ignition[929]: INFO : Ignition finished successfully
Mar 2 13:05:39.911738 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 2 13:05:39.930786 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 2 13:05:40.110161 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 13:05:40.136643 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943)
Mar 2 13:05:40.136849 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:05:40.145588 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:05:40.145765 kernel: BTRFS info (device vda6): using free space tree
Mar 2 13:05:40.167597 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 13:05:40.172413 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 13:05:40.225734 ignition[960]: INFO : Ignition 2.19.0
Mar 2 13:05:40.225734 ignition[960]: INFO : Stage: files
Mar 2 13:05:40.238117 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:05:40.238117 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:05:40.238117 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Mar 2 13:05:40.238117 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 2 13:05:40.238117 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 2 13:05:40.300984 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 2 13:05:40.300984 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 2 13:05:40.300984 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 2 13:05:40.300984 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 2 13:05:40.300984 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 2 13:05:40.276900 unknown[960]: wrote ssh authorized keys file for user: core
Mar 2 13:05:40.415643 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 2 13:05:41.058530 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 2 13:05:41.058530 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 2 13:05:41.077404 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 2 13:05:41.088526 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 13:05:41.096793 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 13:05:41.116884 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 13:05:41.116884 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 13:05:41.133021 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 13:05:41.133021 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 13:05:41.150630 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 13:05:41.160002 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 13:05:41.168478 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 2 13:05:41.180170 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 2 13:05:41.192186 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 2 13:05:41.214822 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 2 13:05:41.640477 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 2 13:05:44.396680 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 2 13:05:44.396680 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 2 13:05:44.426595 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 13:05:44.426595 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 13:05:44.426595 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 2 13:05:44.426595 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 2 13:05:44.426595 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 2 13:05:44.426595 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 2 13:05:44.426595 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 2 13:05:44.426595 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 2 13:05:44.578527 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 2 13:05:44.592363 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 2 13:05:44.592363 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 2 13:05:44.592363 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 2 13:05:44.592363 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 2 13:05:44.592363 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 13:05:44.592363 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 13:05:44.592363 ignition[960]: INFO : files: files passed
Mar 2 13:05:44.592363 ignition[960]: INFO : Ignition finished successfully
Mar 2 13:05:44.600820 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 2 13:05:44.634920 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 2 13:05:44.652790 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 2 13:05:44.690099 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 2 13:05:44.697400 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:05:44.697400 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:05:44.691663 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 13:05:44.738693 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:05:44.709473 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 2 13:05:44.774004 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 2 13:05:44.795665 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 2 13:05:44.798200 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 2 13:05:44.992617 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 2 13:05:45.013950 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 2 13:05:45.042725 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 2 13:05:45.052504 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 2 13:05:45.060679 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 2 13:05:45.142584 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 2 13:05:45.522585 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 13:05:45.556219 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 2 13:05:45.643503 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:05:45.663576 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:05:45.680853 systemd[1]: Stopped target timers.target - Timer Units.
Mar 2 13:05:45.685820 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 2 13:05:45.686030 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 13:05:45.721394 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 2 13:05:45.736398 systemd[1]: Stopped target basic.target - Basic System.
Mar 2 13:05:45.745155 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 2 13:05:45.751985 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 13:05:45.767472 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 2 13:05:45.778459 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 2 13:05:45.791519 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 13:05:45.823107 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 2 13:05:45.833440 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 2 13:05:45.844828 systemd[1]: Stopped target swap.target - Swaps.
Mar 2 13:05:45.852711 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 2 13:05:45.858829 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 13:05:45.875988 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:05:45.889871 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:05:45.895899 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 2 13:05:45.907738 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:05:45.932062 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 2 13:05:45.932370 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 2 13:05:45.963071 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 2 13:05:45.970642 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 13:05:45.981004 systemd[1]: Stopped target paths.target - Path Units.
Mar 2 13:05:45.989500 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 2 13:05:45.996146 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:05:46.021270 systemd[1]: Stopped target slices.target - Slice Units.
Mar 2 13:05:46.029864 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 2 13:05:46.039045 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 2 13:05:46.047109 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 13:05:46.056696 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 2 13:05:46.056965 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 13:05:46.070158 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 2 13:05:46.075701 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 13:05:46.094401 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 2 13:05:46.094665 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 2 13:05:46.136229 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 2 13:05:46.152412 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 2 13:05:46.156701 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 2 13:05:46.160539 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:05:46.177688 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 2 13:05:46.177973 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 13:05:46.214751 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 2 13:05:46.219365 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 2 13:05:46.231502 ignition[1015]: INFO : Ignition 2.19.0
Mar 2 13:05:46.231502 ignition[1015]: INFO : Stage: umount
Mar 2 13:05:46.231502 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:05:46.231502 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:05:46.231502 ignition[1015]: INFO : umount: umount passed
Mar 2 13:05:46.231502 ignition[1015]: INFO : Ignition finished successfully
Mar 2 13:05:46.235633 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 2 13:05:46.253644 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 2 13:05:46.278596 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 2 13:05:46.284232 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 2 13:05:46.292469 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 2 13:05:46.311634 systemd[1]: Stopped target network.target - Network.
Mar 2 13:05:46.321110 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 2 13:05:46.321259 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 2 13:05:46.335964 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 2 13:05:46.338910 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 2 13:05:46.348613 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 2 13:05:46.348726 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 2 13:05:46.371388 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 2 13:05:46.371525 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 2 13:05:46.382443 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 2 13:05:46.382562 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 2 13:05:46.393626 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 2 13:05:46.411918 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 2 13:05:46.430139 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 2 13:05:46.430541 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 2 13:05:46.431098 systemd-networkd[783]: eth0: DHCPv6 lease lost
Mar 2 13:05:46.445946 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 2 13:05:46.446738 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 2 13:05:46.454457 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 2 13:05:46.454585 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:05:46.488747 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 2 13:05:46.494528 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 2 13:05:46.494674 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 13:05:46.512844 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 2 13:05:46.512987 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:05:46.525007 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 2 13:05:46.525148 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:05:46.535279 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 2 13:05:46.535537 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:05:46.545458 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:05:46.573537 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 2 13:05:46.574351 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:05:46.583231 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 2 13:05:46.583487 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 2 13:05:46.596226 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 2 13:05:46.596407 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:05:46.610942 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 2 13:05:46.611042 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:05:46.627065 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 2 13:05:46.627185 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 13:05:46.635031 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 2 13:05:46.635985 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 2 13:05:46.653718 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 13:05:46.653903 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:05:46.691942 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 2 13:05:46.714088 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 2 13:05:46.714199 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:05:46.722662 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 2 13:05:46.722753 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:05:46.733868 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 2 13:05:46.733970 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:05:46.741615 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:05:46.741718 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:05:46.753119 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 2 13:05:46.753669 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 2 13:05:46.762736 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 2 13:05:46.818738 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 2 13:05:46.838649 systemd[1]: Switching root.
Mar 2 13:05:46.915871 systemd-journald[194]: Journal stopped
Mar 2 13:05:50.727650 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 2 13:05:50.727761 kernel: SELinux: policy capability network_peer_controls=1
Mar 2 13:05:50.727791 kernel: SELinux: policy capability open_perms=1
Mar 2 13:05:50.727810 kernel: SELinux: policy capability extended_socket_class=1
Mar 2 13:05:50.727835 kernel: SELinux: policy capability always_check_network=0
Mar 2 13:05:50.727911 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 2 13:05:50.727935 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 2 13:05:50.727953 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 2 13:05:50.727970 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 2 13:05:50.727987 kernel: audit: type=1403 audit(1772456747.274:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 2 13:05:50.728016 systemd[1]: Successfully loaded SELinux policy in 111.531ms.
Mar 2 13:05:50.728045 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.144ms.
Mar 2 13:05:50.728064 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 13:05:50.728083 systemd[1]: Detected virtualization kvm.
Mar 2 13:05:50.728101 systemd[1]: Detected architecture x86-64.
Mar 2 13:05:50.728124 systemd[1]: Detected first boot.
Mar 2 13:05:50.728142 systemd[1]: Initializing machine ID from VM UUID.
Mar 2 13:05:50.728162 zram_generator::config[1057]: No configuration found.
Mar 2 13:05:50.728186 systemd[1]: Populated /etc with preset unit settings.
Mar 2 13:05:50.728205 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 2 13:05:50.728223 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 2 13:05:50.728253 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 2 13:05:50.728276 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 2 13:05:50.728382 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 2 13:05:50.728406 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 2 13:05:50.728426 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 2 13:05:50.728447 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 2 13:05:50.728473 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 2 13:05:50.728495 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 2 13:05:50.728514 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 2 13:05:50.728532 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:05:50.728550 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:05:50.728569 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 2 13:05:50.728594 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 2 13:05:50.728613 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 2 13:05:50.728637 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 13:05:50.728658 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 2 13:05:50.728676 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:05:50.728695 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 2 13:05:50.728713 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 2 13:05:50.728732 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 2 13:05:50.728751 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 2 13:05:50.728769 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:05:50.728792 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 13:05:50.728809 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 13:05:50.728827 systemd[1]: Reached target swap.target - Swaps.
Mar 2 13:05:50.728903 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 2 13:05:50.728927 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 2 13:05:50.728949 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:05:50.728969 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:05:50.728988 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:05:50.729007 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 2 13:05:50.729026 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 2 13:05:50.729054 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 2 13:05:50.729073 systemd[1]: Mounting media.mount - External Media Directory...
Mar 2 13:05:50.729093 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:05:50.729111 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 2 13:05:50.729136 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 2 13:05:50.729157 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 2 13:05:50.729178 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 2 13:05:50.729198 systemd[1]: Reached target machines.target - Containers.
Mar 2 13:05:50.729224 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 2 13:05:50.729243 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:05:50.729264 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 13:05:50.729351 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 2 13:05:50.729378 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:05:50.729397 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 13:05:50.729416 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:05:50.729435 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 2 13:05:50.729462 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:05:50.729482 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 2 13:05:50.729501 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 2 13:05:50.729519 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 2 13:05:50.729536 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 2 13:05:50.729553 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 2 13:05:50.729570 kernel: fuse: init (API version 7.39)
Mar 2 13:05:50.729587 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 13:05:50.729605 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 13:05:50.729630 kernel: loop: module loaded
Mar 2 13:05:50.729648 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 2 13:05:50.729667 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 2 13:05:50.729684 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 13:05:50.729703 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 2 13:05:50.729723 systemd[1]: Stopped verity-setup.service.
Mar 2 13:05:50.729742 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:05:50.729761 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 2 13:05:50.729778 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 2 13:05:50.729803 systemd[1]: Mounted media.mount - External Media Directory.
Mar 2 13:05:50.729939 systemd-journald[1141]: Collecting audit messages is disabled.
Mar 2 13:05:50.729983 systemd-journald[1141]: Journal started
Mar 2 13:05:50.730017 systemd-journald[1141]: Runtime Journal (/run/log/journal/08ff4b6d74fd42859dae87e173279043) is 6.0M, max 48.4M, 42.3M free.
Mar 2 13:05:49.056150 systemd[1]: Queued start job for default target multi-user.target.
Mar 2 13:05:49.130339 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 2 13:05:49.132983 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 2 13:05:49.135711 systemd[1]: systemd-journald.service: Consumed 2.218s CPU time.
Mar 2 13:05:50.757003 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 13:05:50.757518 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 2 13:05:50.765728 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 2 13:05:50.777724 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 2 13:05:50.793535 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 2 13:05:50.809675 kernel: ACPI: bus type drm_connector registered
Mar 2 13:05:50.827342 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:05:50.836371 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 2 13:05:50.838144 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 2 13:05:50.845790 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:05:50.846145 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:05:50.853499 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 13:05:50.854440 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 13:05:50.860544 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:05:50.861076 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:05:50.870123 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 2 13:05:50.870900 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 2 13:05:50.880180 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:05:50.881083 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:05:50.887638 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:05:50.893367 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 2 13:05:50.900016 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 2 13:05:51.139148 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 2 13:05:51.166565 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 2 13:05:51.181781 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 2 13:05:51.191584 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 2 13:05:51.191663 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 13:05:51.195575 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 2 13:05:51.212149 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 2 13:05:51.224714 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 2 13:05:51.235743 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:05:51.241403 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 2 13:05:51.250054 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 2 13:05:51.256157 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 13:05:51.258795 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 2 13:05:51.275974 systemd-journald[1141]: Time spent on flushing to /var/log/journal/08ff4b6d74fd42859dae87e173279043 is 118.091ms for 942 entries.
Mar 2 13:05:51.275974 systemd-journald[1141]: System Journal (/var/log/journal/08ff4b6d74fd42859dae87e173279043) is 8.0M, max 195.6M, 187.6M free.
Mar 2 13:05:51.525095 systemd-journald[1141]: Received client request to flush runtime journal.
Mar 2 13:05:51.525163 kernel: loop0: detected capacity change from 0 to 219192
Mar 2 13:05:51.271749 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 13:05:51.286136 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:05:51.337775 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 2 13:05:51.368242 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 13:05:51.421449 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:05:51.436605 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 2 13:05:51.453540 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 2 13:05:51.460495 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 2 13:05:51.467427 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 2 13:05:51.488704 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 2 13:05:51.522180 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 2 13:05:51.550703 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 2 13:05:51.562818 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 2 13:05:51.761758 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:05:51.794661 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 2 13:05:51.816481 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Mar 2 13:05:51.817434 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Mar 2 13:05:51.822928 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 2 13:05:51.829754 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 2 13:05:51.831688 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 2 13:05:51.843962 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:05:51.885834 kernel: loop1: detected capacity change from 0 to 142488
Mar 2 13:05:51.871087 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 2 13:05:52.188430 kernel: loop2: detected capacity change from 0 to 140768
Mar 2 13:05:52.421028 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 2 13:05:52.446493 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 13:05:52.878648 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Mar 2 13:05:52.878681 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Mar 2 13:05:52.890462 kernel: loop3: detected capacity change from 0 to 219192
Mar 2 13:05:52.898803 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:05:53.311439 kernel: loop4: detected capacity change from 0 to 142488
Mar 2 13:05:53.386398 kernel: loop5: detected capacity change from 0 to 140768
Mar 2 13:05:53.537240 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 2 13:05:53.540822 (sd-merge)[1198]: Merged extensions into '/usr'.
Mar 2 13:05:53.621569 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 2 13:05:53.622230 systemd[1]: Reloading...
Mar 2 13:05:54.479067 zram_generator::config[1225]: No configuration found.
Mar 2 13:05:55.734232 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:05:55.761438 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 2 13:05:55.883267 systemd[1]: Reloading finished in 2258 ms.
Mar 2 13:05:56.101732 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 2 13:05:56.125842 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 2 13:05:56.156838 systemd[1]: Starting ensure-sysext.service...
Mar 2 13:05:56.166463 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 13:05:56.399846 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)...
Mar 2 13:05:56.400069 systemd[1]: Reloading...
Mar 2 13:05:56.700774 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 2 13:05:56.704532 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 2 13:05:56.721720 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 2 13:05:56.722530 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Mar 2 13:05:56.723134 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Mar 2 13:05:56.735188 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 13:05:56.737248 systemd-tmpfiles[1264]: Skipping /boot
Mar 2 13:05:56.777415 zram_generator::config[1289]: No configuration found.
Mar 2 13:05:57.043678 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 13:05:57.043849 systemd-tmpfiles[1264]: Skipping /boot
Mar 2 13:05:57.505864 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:05:57.694532 systemd[1]: Reloading finished in 1293 ms.
Mar 2 13:05:57.746044 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 2 13:05:57.753182 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:05:57.824584 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 2 13:05:57.895713 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 2 13:05:57.948732 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 2 13:05:57.984720 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 13:05:58.021802 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:05:58.037225 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 2 13:05:58.049477 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:05:58.049752 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:05:58.064047 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:05:58.081029 augenrules[1352]: No rules
Mar 2 13:05:58.085955 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:05:58.112124 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:05:58.122688 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:05:58.136507 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 2 13:05:58.139560 systemd-udevd[1348]: Using default interface naming scheme 'v255'.
Mar 2 13:05:58.141368 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:05:58.144169 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 2 13:05:58.161151 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 2 13:05:58.171411 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:05:58.171724 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:05:58.178796 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:05:58.179446 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:05:58.199253 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:05:58.228868 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:05:58.266661 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 2 13:05:58.281058 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:05:58.281266 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:05:58.290663 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:05:58.313558 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 13:05:58.323873 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:05:58.336972 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:05:58.344121 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:05:58.357277 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 2 13:05:58.364094 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:05:58.364905 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:05:58.372787 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 2 13:05:58.383230 systemd[1]: Finished ensure-sysext.service.
Mar 2 13:05:58.393241 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 2 13:05:58.400053 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:05:58.400452 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:05:58.424394 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 13:05:58.424717 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 13:05:58.430667 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:05:58.430980 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:05:58.437278 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:05:58.437886 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:05:58.482937 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 2 13:05:58.487639 systemd-resolved[1346]: Positive Trust Anchors:
Mar 2 13:05:58.487651 systemd-resolved[1346]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 13:05:58.487696 systemd-resolved[1346]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 13:05:58.497836 systemd-resolved[1346]: Defaulting to hostname 'linux'.
Mar 2 13:05:58.529380 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1370)
Mar 2 13:05:58.536611 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 13:05:58.543616 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 13:05:58.543749 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 13:05:58.553628 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 2 13:05:58.900157 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 2 13:05:58.900972 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 13:05:58.928113 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 2 13:05:58.935530 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:05:59.383453 systemd-networkd[1401]: lo: Link UP
Mar 2 13:05:59.383493 systemd-networkd[1401]: lo: Gained carrier
Mar 2 13:05:59.387057 systemd-networkd[1401]: Enumeration completed
Mar 2 13:05:59.388726 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:05:59.388767 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 13:05:59.389232 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 2 13:05:59.391542 systemd-networkd[1401]: eth0: Link UP
Mar 2 13:05:59.391580 systemd-networkd[1401]: eth0: Gained carrier
Mar 2 13:05:59.391602 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:05:59.413802 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 13:05:59.428822 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 2 13:05:59.443063 kernel: ACPI: button: Power Button [PWRF]
Mar 2 13:05:59.441780 systemd[1]: Reached target network.target - Network.
Mar 2 13:05:59.577426 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 2 13:05:59.581657 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 2 13:05:59.627432 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 2 13:05:59.632935 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 2 13:05:59.639944 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 2 13:05:59.631965 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 2 13:05:59.638673 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 2 13:05:59.656738 systemd[1]: Reached target time-set.target - System Time Set.
Mar 2 13:06:00.480928 systemd-resolved[1346]: Clock change detected. Flushing caches.
Mar 2 13:06:00.480939 systemd-timesyncd[1405]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 2 13:06:00.481103 systemd-timesyncd[1405]: Initial clock synchronization to Mon 2026-03-02 13:06:00.480258 UTC.
Mar 2 13:06:00.526124 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 2 13:06:00.564887 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 2 13:06:00.574113 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:06:00.593885 kernel: mousedev: PS/2 mouse device common for all mice
Mar 2 13:06:01.656993 systemd-networkd[1401]: eth0: Gained IPv6LL
Mar 2 13:06:01.772146 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 2 13:06:01.846443 systemd[1]: Reached target network-online.target - Network is Online.
Mar 2 13:06:01.972051 kernel: kvm_amd: TSC scaling supported
Mar 2 13:06:01.972210 kernel: kvm_amd: Nested Virtualization enabled
Mar 2 13:06:01.972243 kernel: kvm_amd: Nested Paging enabled
Mar 2 13:06:01.972316 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 2 13:06:01.972427 kernel: kvm_amd: PMU virtualization is disabled
Mar 2 13:06:02.196900 kernel: EDAC MC: Ver: 3.0.0
Mar 2 13:06:02.263190 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 2 13:06:02.332051 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 2 13:06:02.343593 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:06:02.367197 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 2 13:06:03.170751 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 2 13:06:03.185286 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:06:03.191060 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 13:06:03.202980 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 2 13:06:03.208879 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 2 13:06:03.248194 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 2 13:06:03.254391 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 2 13:06:03.262182 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 2 13:06:03.268626 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 2 13:06:03.268959 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:06:03.279314 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:06:03.298448 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 2 13:06:03.311989 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 2 13:06:03.354716 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 2 13:06:03.362743 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 2 13:06:03.370429 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 2 13:06:03.382255 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:06:03.389287 systemd[1]: Reached target basic.target - Basic System.
Mar 2 13:06:03.411689 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 2 13:06:03.427209 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 2 13:06:03.447230 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 2 13:06:03.460921 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 2 13:06:03.460497 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 2 13:06:03.647945 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 2 13:06:03.673640 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 2 13:06:03.683094 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 2 13:06:03.688378 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 2 13:06:03.692454 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:06:03.703704 jq[1438]: false
Mar 2 13:06:03.706421 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 2 13:06:03.735163 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 2 13:06:03.745596 dbus-daemon[1437]: [system] SELinux support is enabled
Mar 2 13:06:03.766945 extend-filesystems[1439]: Found loop3
Mar 2 13:06:03.767207 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 2 13:06:03.816310 extend-filesystems[1439]: Found loop4
Mar 2 13:06:03.816310 extend-filesystems[1439]: Found loop5
Mar 2 13:06:03.816310 extend-filesystems[1439]: Found sr0
Mar 2 13:06:03.816310 extend-filesystems[1439]: Found vda
Mar 2 13:06:03.816310 extend-filesystems[1439]: Found vda1
Mar 2 13:06:03.816310 extend-filesystems[1439]: Found vda2
Mar 2 13:06:03.816310 extend-filesystems[1439]: Found vda3
Mar 2 13:06:03.816310 extend-filesystems[1439]: Found usr
Mar 2 13:06:03.816310 extend-filesystems[1439]: Found vda4
Mar 2 13:06:03.816310 extend-filesystems[1439]: Found vda6
Mar 2 13:06:03.816310 extend-filesystems[1439]: Found vda7
Mar 2 13:06:03.816310 extend-filesystems[1439]: Found vda9
Mar 2 13:06:03.816310 extend-filesystems[1439]: Checking size of /dev/vda9
Mar 2 13:06:03.977316 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1370)
Mar 2 13:06:03.977433 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 2 13:06:03.787140 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 2 13:06:03.978027 extend-filesystems[1439]: Resized partition /dev/vda9
Mar 2 13:06:03.802000 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 2 13:06:03.984095 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024)
Mar 2 13:06:04.046220 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 2 13:06:03.823677 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 2 13:06:03.847259 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 2 13:06:04.047174 update_engine[1463]: I20260302 13:06:03.955605 1463 main.cc:92] Flatcar Update Engine starting
Mar 2 13:06:04.047174 update_engine[1463]: I20260302 13:06:03.958753 1463 update_check_scheduler.cc:74] Next update check in 4m56s
Mar 2 13:06:03.849270 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 2 13:06:04.080378 jq[1466]: true
Mar 2 13:06:04.080860 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 2 13:06:04.080860 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 2 13:06:04.080860 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 2 13:06:03.865396 systemd[1]: Starting update-engine.service - Update Engine...
Mar 2 13:06:04.132142 extend-filesystems[1439]: Resized filesystem in /dev/vda9
Mar 2 13:06:03.884600 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 2 13:06:03.897668 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 2 13:06:03.911533 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 2 13:06:03.959858 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 2 13:06:04.145687 jq[1473]: true
Mar 2 13:06:03.960225 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 2 13:06:03.963375 systemd[1]: motdgen.service: Deactivated successfully.
Mar 2 13:06:03.963741 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 2 13:06:03.971079 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 2 13:06:03.982268 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 2 13:06:03.982620 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 2 13:06:04.059970 systemd[1]: Started update-engine.service - Update Engine.
Mar 2 13:06:04.079368 (ntainerd)[1474]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 2 13:06:04.087916 systemd-logind[1457]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 2 13:06:04.087950 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 2 13:06:04.087957 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 2 13:06:04.088300 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 2 13:06:04.110554 systemd-logind[1457]: New seat seat0.
Mar 2 13:06:04.111921 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 2 13:06:04.112274 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 2 13:06:04.138686 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 2 13:06:04.171047 tar[1472]: linux-amd64/LICENSE
Mar 2 13:06:04.177347 tar[1472]: linux-amd64/helm
Mar 2 13:06:04.178979 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 2 13:06:04.179546 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 2 13:06:04.180089 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 2 13:06:04.188623 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 2 13:06:04.189211 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 2 13:06:04.213053 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 2 13:06:04.243122 bash[1507]: Updated "/home/core/.ssh/authorized_keys"
Mar 2 13:06:04.253408 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 2 13:06:04.263711 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 2 13:06:04.353171 locksmithd[1508]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 2 13:06:04.533447 containerd[1474]: time="2026-03-02T13:06:04.532348203Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 2 13:06:04.591268 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 2 13:06:04.601664 containerd[1474]: time="2026-03-02T13:06:04.601544863Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:06:04.608169 containerd[1474]: time="2026-03-02T13:06:04.608087676Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:06:04.608338 containerd[1474]: time="2026-03-02T13:06:04.608307826Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 2 13:06:04.608441 containerd[1474]: time="2026-03-02T13:06:04.608416279Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 2 13:06:04.609013 containerd[1474]: time="2026-03-02T13:06:04.608979531Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 2 13:06:04.609123 containerd[1474]: time="2026-03-02T13:06:04.609099004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 2 13:06:04.609395 containerd[1474]: time="2026-03-02T13:06:04.609360141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:06:04.609502 containerd[1474]: time="2026-03-02T13:06:04.609477550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:06:04.610443 containerd[1474]: time="2026-03-02T13:06:04.610406715Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:06:04.610546 containerd[1474]: time="2026-03-02T13:06:04.610520336Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 2 13:06:04.610702 containerd[1474]: time="2026-03-02T13:06:04.610672400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:06:04.610880 containerd[1474]: time="2026-03-02T13:06:04.610851174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 2 13:06:04.611169 containerd[1474]: time="2026-03-02T13:06:04.611139512Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:06:04.611956 containerd[1474]: time="2026-03-02T13:06:04.611924297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:06:04.612369 containerd[1474]: time="2026-03-02T13:06:04.612333811Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:06:04.612470 containerd[1474]: time="2026-03-02T13:06:04.612445761Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 2 13:06:04.612878 containerd[1474]: time="2026-03-02T13:06:04.612847771Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 2 13:06:04.613077 containerd[1474]: time="2026-03-02T13:06:04.613039338Z" level=info msg="metadata content store policy set" policy=shared
Mar 2 13:06:04.636120 containerd[1474]: time="2026-03-02T13:06:04.636048084Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 2 13:06:04.636358 containerd[1474]: time="2026-03-02T13:06:04.636332585Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 2 13:06:04.636487 containerd[1474]: time="2026-03-02T13:06:04.636466595Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 2 13:06:04.636658 containerd[1474]: time="2026-03-02T13:06:04.636629600Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 2 13:06:04.636873 containerd[1474]: time="2026-03-02T13:06:04.636747840Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 2 13:06:04.637412 containerd[1474]: time="2026-03-02T13:06:04.637384739Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 2 13:06:04.640857 containerd[1474]: time="2026-03-02T13:06:04.640687044Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 2 13:06:04.643888 containerd[1474]: time="2026-03-02T13:06:04.643205625Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 2 13:06:04.643888 containerd[1474]: time="2026-03-02T13:06:04.643247692Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 2 13:06:04.643888 containerd[1474]: time="2026-03-02T13:06:04.643268993Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 2 13:06:04.643888 containerd[1474]: time="2026-03-02T13:06:04.643289851Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 2 13:06:04.643888 containerd[1474]: time="2026-03-02T13:06:04.643310220Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 2 13:06:04.643888 containerd[1474]: time="2026-03-02T13:06:04.643328774Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 2 13:06:04.643888 containerd[1474]: time="2026-03-02T13:06:04.643350415Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 2 13:06:04.643888 containerd[1474]: time="2026-03-02T13:06:04.643373217Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 2 13:06:04.643888 containerd[1474]: time="2026-03-02T13:06:04.643400608Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 2 13:06:04.643888 containerd[1474]: time="2026-03-02T13:06:04.643418462Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 2 13:06:04.643888 containerd[1474]: time="2026-03-02T13:06:04.643436405Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 2 13:06:04.643888 containerd[1474]: time="2026-03-02T13:06:04.643465910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 2 13:06:04.643888 containerd[1474]: time="2026-03-02T13:06:04.643486328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 2 13:06:04.643888 containerd[1474]: time="2026-03-02T13:06:04.643506025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 2 13:06:04.644346 containerd[1474]: time="2026-03-02T13:06:04.643533737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 2 13:06:04.644346 containerd[1474]: time="2026-03-02T13:06:04.643553293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 2 13:06:04.644346 containerd[1474]: time="2026-03-02T13:06:04.643638192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 2 13:06:04.644346 containerd[1474]: time="2026-03-02T13:06:04.643665703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 2 13:06:04.644346 containerd[1474]: time="2026-03-02T13:06:04.643689427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 2 13:06:04.644346 containerd[1474]: time="2026-03-02T13:06:04.643713272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 2 13:06:04.644346 containerd[1474]: time="2026-03-02T13:06:04.643734351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 2 13:06:04.644346 containerd[1474]: time="2026-03-02T13:06:04.643751543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 2 13:06:04.644684 containerd[1474]: time="2026-03-02T13:06:04.644653136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 2 13:06:04.645070 containerd[1474]: time="2026-03-02T13:06:04.644757340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 2 13:06:04.645070 containerd[1474]: time="2026-03-02T13:06:04.644891491Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 2 13:06:04.645070 containerd[1474]: time="2026-03-02T13:06:04.644936274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 2 13:06:04.645070 containerd[1474]: time="2026-03-02T13:06:04.644960229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 2 13:06:04.645070 containerd[1474]: time="2026-03-02T13:06:04.644988061Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 2 13:06:04.645488 containerd[1474]: time="2026-03-02T13:06:04.645466143Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 2 13:06:04.645896 containerd[1474]: time="2026-03-02T13:06:04.645558806Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 2 13:06:04.645896 containerd[1474]: time="2026-03-02T13:06:04.645716741Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 2 13:06:04.645896 containerd[1474]: time="2026-03-02T13:06:04.645739764Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 2 13:06:04.645896 containerd[1474]: time="2026-03-02T13:06:04.645750314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 2 13:06:04.647149 containerd[1474]: time="2026-03-02T13:06:04.646032992Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 2 13:06:04.647149 containerd[1474]: time="2026-03-02T13:06:04.646063288Z" level=info msg="NRI interface is disabled by configuration."
Mar 2 13:06:04.647149 containerd[1474]: time="2026-03-02T13:06:04.646077225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 2 13:06:04.647272 containerd[1474]: time="2026-03-02T13:06:04.646441394Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 2 13:06:04.647272 containerd[1474]: time="2026-03-02T13:06:04.646517075Z" level=info msg="Connect containerd service"
Mar 2 13:06:04.647272 containerd[1474]: time="2026-03-02T13:06:04.646625157Z" level=info msg="using legacy CRI server"
Mar 2 13:06:04.647272 containerd[1474]: time="2026-03-02T13:06:04.646642830Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 2 13:06:04.647272 containerd[1474]: time="2026-03-02T13:06:04.646852852Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 2 13:06:04.650057 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 2 13:06:04.661422 containerd[1474]: time="2026-03-02T13:06:04.661305152Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 2 13:06:04.662213 containerd[1474]: time="2026-03-02T13:06:04.661958322Z" level=info msg="Start subscribing containerd event"
Mar 2 13:06:04.662213 containerd[1474]: time="2026-03-02T13:06:04.662079889Z" level=info msg="Start recovering state"
Mar 2 13:06:04.662213 containerd[1474]: time="2026-03-02T13:06:04.662195885Z" level=info msg="Start event monitor"
Mar 2 13:06:04.662213 containerd[1474]: time="2026-03-02T13:06:04.662217306Z" level=info msg="Start snapshots syncer"
Mar 2 13:06:04.662483 containerd[1474]: time="2026-03-02T13:06:04.662232063Z" level=info msg="Start cni network conf syncer for default"
Mar 2 13:06:04.662483 containerd[1474]: time="2026-03-02T13:06:04.662243254Z" level=info msg="Start streaming server"
Mar 2 13:06:04.663295 containerd[1474]: time="2026-03-02T13:06:04.663118668Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 2 13:06:04.663676 containerd[1474]: time="2026-03-02T13:06:04.663551365Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 2 13:06:04.666416 containerd[1474]: time="2026-03-02T13:06:04.665961414Z" level=info msg="containerd successfully booted in 0.136625s"
Mar 2 13:06:04.677131 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 2 13:06:04.684957 systemd[1]: Started containerd.service - containerd container runtime.
Mar 2 13:06:04.717908 systemd[1]: issuegen.service: Deactivated successfully.
Mar 2 13:06:04.718280 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 2 13:06:04.738684 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 2 13:06:04.762758 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 2 13:06:04.778917 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 2 13:06:04.795648 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 2 13:06:04.796165 systemd[1]: Reached target getty.target - Login Prompts.
Mar 2 13:06:04.940337 tar[1472]: linux-amd64/README.md
Mar 2 13:06:04.966543 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 2 13:06:05.602582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:06:05.611292 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 2 13:06:05.617147 systemd[1]: Startup finished in 4.871s (kernel) + 19.128s (initrd) + 17.630s (userspace) = 41.630s.
Mar 2 13:06:05.693503 (kubelet)[1548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:06:06.633942 kubelet[1548]: E0302 13:06:06.633692 1548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:06:06.639949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:06:06.640276 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:06:06.640934 systemd[1]: kubelet.service: Consumed 1.444s CPU time.
Mar 2 13:06:11.595100 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 2 13:06:11.651698 systemd[1]: Started sshd@0-10.0.0.53:22-10.0.0.1:57764.service - OpenSSH per-connection server daemon (10.0.0.1:57764).
Mar 2 13:06:14.607128 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 57764 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:06:14.618447 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:06:14.691145 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 2 13:06:14.705565 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 2 13:06:14.712009 systemd-logind[1457]: New session 1 of user core.
Mar 2 13:06:14.806561 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 2 13:06:14.855758 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 2 13:06:14.985365 (systemd)[1566]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 2 13:06:15.286539 systemd[1566]: Queued start job for default target default.target.
Mar 2 13:06:15.301493 systemd[1566]: Created slice app.slice - User Application Slice.
Mar 2 13:06:15.301592 systemd[1566]: Reached target paths.target - Paths.
Mar 2 13:06:15.301619 systemd[1566]: Reached target timers.target - Timers.
Mar 2 13:06:15.306351 systemd[1566]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 2 13:06:15.359412 systemd[1566]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 2 13:06:15.359701 systemd[1566]: Reached target sockets.target - Sockets.
Mar 2 13:06:15.359722 systemd[1566]: Reached target basic.target - Basic System.
Mar 2 13:06:15.360023 systemd[1566]: Reached target default.target - Main User Target.
Mar 2 13:06:15.360087 systemd[1566]: Startup finished in 342ms.
Mar 2 13:06:15.360358 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 2 13:06:15.370263 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 2 13:06:15.489558 systemd[1]: Started sshd@1-10.0.0.53:22-10.0.0.1:57780.service - OpenSSH per-connection server daemon (10.0.0.1:57780).
Mar 2 13:06:15.596056 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 57780 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:06:15.600105 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:06:15.628678 systemd-logind[1457]: New session 2 of user core.
Mar 2 13:06:15.640174 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 2 13:06:15.753208 sshd[1577]: pam_unix(sshd:session): session closed for user core
Mar 2 13:06:15.769575 systemd[1]: sshd@1-10.0.0.53:22-10.0.0.1:57780.service: Deactivated successfully.
Mar 2 13:06:15.772919 systemd[1]: session-2.scope: Deactivated successfully.
Mar 2 13:06:15.775964 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit.
Mar 2 13:06:15.801435 systemd[1]: Started sshd@2-10.0.0.53:22-10.0.0.1:57794.service - OpenSSH per-connection server daemon (10.0.0.1:57794).
Mar 2 13:06:15.805371 systemd-logind[1457]: Removed session 2.
Mar 2 13:06:15.877280 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 57794 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:06:15.881681 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:06:15.898625 systemd-logind[1457]: New session 3 of user core.
Mar 2 13:06:15.917209 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 2 13:06:15.992685 sshd[1584]: pam_unix(sshd:session): session closed for user core
Mar 2 13:06:16.007240 systemd[1]: sshd@2-10.0.0.53:22-10.0.0.1:57794.service: Deactivated successfully.
Mar 2 13:06:16.009471 systemd[1]: session-3.scope: Deactivated successfully.
Mar 2 13:06:16.013435 systemd-logind[1457]: Session 3 logged out. Waiting for processes to exit.
Mar 2 13:06:16.034372 systemd[1]: Started sshd@3-10.0.0.53:22-10.0.0.1:57802.service - OpenSSH per-connection server daemon (10.0.0.1:57802).
Mar 2 13:06:16.038094 systemd-logind[1457]: Removed session 3.
Mar 2 13:06:16.098024 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 57802 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:06:16.105716 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:06:16.132746 systemd-logind[1457]: New session 4 of user core.
Mar 2 13:06:16.146325 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 2 13:06:16.245559 sshd[1591]: pam_unix(sshd:session): session closed for user core
Mar 2 13:06:16.262241 systemd[1]: sshd@3-10.0.0.53:22-10.0.0.1:57802.service: Deactivated successfully.
Mar 2 13:06:16.268604 systemd[1]: session-4.scope: Deactivated successfully.
Mar 2 13:06:16.272628 systemd-logind[1457]: Session 4 logged out. Waiting for processes to exit.
Mar 2 13:06:16.288455 systemd[1]: Started sshd@4-10.0.0.53:22-10.0.0.1:57818.service - OpenSSH per-connection server daemon (10.0.0.1:57818).
Mar 2 13:06:16.292257 systemd-logind[1457]: Removed session 4.
Mar 2 13:06:16.356134 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 57818 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:06:16.360044 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:06:16.376347 systemd-logind[1457]: New session 5 of user core.
Mar 2 13:06:16.385159 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 2 13:06:16.496555 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 2 13:06:16.497273 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:06:16.806704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 2 13:06:16.839268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:06:17.764126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:06:17.785054 (kubelet)[1621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:06:18.747896 kubelet[1621]: E0302 13:06:18.746301 1621 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:06:18.756011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:06:18.756936 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:06:18.758300 systemd[1]: kubelet.service: Consumed 1.381s CPU time.
Mar 2 13:06:20.309231 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 2 13:06:20.409549 (dockerd)[1636]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 2 13:06:24.288717 dockerd[1636]: time="2026-03-02T13:06:24.287561321Z" level=info msg="Starting up"
Mar 2 13:06:25.798247 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2726767011-merged.mount: Deactivated successfully.
Mar 2 13:06:26.164523 dockerd[1636]: time="2026-03-02T13:06:26.163032738Z" level=info msg="Loading containers: start."
Mar 2 13:06:27.246414 kernel: Initializing XFRM netlink socket
Mar 2 13:06:27.893625 systemd-networkd[1401]: docker0: Link UP
Mar 2 13:06:28.082733 dockerd[1636]: time="2026-03-02T13:06:28.082133808Z" level=info msg="Loading containers: done."
Mar 2 13:06:28.292920 dockerd[1636]: time="2026-03-02T13:06:28.292426806Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 2 13:06:28.294032 dockerd[1636]: time="2026-03-02T13:06:28.293332707Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 2 13:06:28.294032 dockerd[1636]: time="2026-03-02T13:06:28.293875981Z" level=info msg="Daemon has completed initialization"
Mar 2 13:06:28.813227 dockerd[1636]: time="2026-03-02T13:06:28.811290266Z" level=info msg="API listen on /run/docker.sock"
Mar 2 13:06:28.815283 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 2 13:06:28.823079 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 2 13:06:28.924583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:06:30.064980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:06:30.071052 (kubelet)[1789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:06:30.546043 kubelet[1789]: E0302 13:06:30.545247 1789 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:06:30.555888 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:06:30.556258 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:06:33.234173 containerd[1474]: time="2026-03-02T13:06:33.233555333Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 2 13:06:35.470035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3683883524.mount: Deactivated successfully.
Mar 2 13:06:39.087559 containerd[1474]: time="2026-03-02T13:06:39.085186602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:06:39.089294 containerd[1474]: time="2026-03-02T13:06:39.089202718Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497"
Mar 2 13:06:39.095219 containerd[1474]: time="2026-03-02T13:06:39.095111707Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:06:39.101053 containerd[1474]: time="2026-03-02T13:06:39.100956900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:06:39.103197 containerd[1474]: time="2026-03-02T13:06:39.102871930Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 5.869226134s"
Mar 2 13:06:39.103197 containerd[1474]: time="2026-03-02T13:06:39.102921250Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\""
Mar 2 13:06:39.106182 containerd[1474]: time="2026-03-02T13:06:39.105433876Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 2 13:06:40.557672 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 2 13:06:40.571694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:06:40.827040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:06:40.835063 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:06:40.961361 kubelet[1871]: E0302 13:06:40.961298 1871 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:06:40.969887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:06:40.970342 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:06:41.581288 containerd[1474]: time="2026-03-02T13:06:41.580900489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:06:41.584589 containerd[1474]: time="2026-03-02T13:06:41.583325761Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823"
Mar 2 13:06:41.586415 containerd[1474]: time="2026-03-02T13:06:41.586309942Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:06:41.594366 containerd[1474]: time="2026-03-02T13:06:41.593946306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:06:41.598629 containerd[1474]: time="2026-03-02T13:06:41.598228746Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 2.492752553s"
Mar 2 13:06:41.598629 containerd[1474]: time="2026-03-02T13:06:41.598336823Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\""
Mar 2 13:06:41.601078 containerd[1474]: time="2026-03-02T13:06:41.600949790Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 2 13:06:43.661289 containerd[1474]: time="2026-03-02T13:06:43.660340191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:06:43.666199 containerd[1474]: time="2026-03-02T13:06:43.666001686Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824"
Mar 2 13:06:43.675292 containerd[1474]: time="2026-03-02T13:06:43.673046986Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:06:43.684700 containerd[1474]: time="2026-03-02T13:06:43.684550835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:06:43.686894 containerd[1474]: time="2026-03-02T13:06:43.686731311Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 2.08573726s"
Mar 2 13:06:43.686894 containerd[1474]: time="2026-03-02T13:06:43.686871146Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\""
Mar 2 13:06:43.688678 containerd[1474]: time="2026-03-02T13:06:43.687713263Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 2 13:06:45.580080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount998252408.mount: Deactivated successfully.
Mar 2 13:06:46.144753 containerd[1474]: time="2026-03-02T13:06:46.141991330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:06:46.146669 containerd[1474]: time="2026-03-02T13:06:46.146428529Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770"
Mar 2 13:06:46.149311 containerd[1474]: time="2026-03-02T13:06:46.149266608Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:06:46.154672 containerd[1474]: time="2026-03-02T13:06:46.154415751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:06:46.156619 containerd[1474]: time="2026-03-02T13:06:46.156008223Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 2.468248545s"
Mar 2 13:06:46.156619 containerd[1474]: time="2026-03-02T13:06:46.156090003Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\""
Mar 2 13:06:46.159004 containerd[1474]: time="2026-03-02T13:06:46.158531645Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 2 13:06:46.793053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3209999895.mount: Deactivated successfully.
Mar 2 13:06:49.002311 update_engine[1463]: I20260302 13:06:48.996707 1463 update_attempter.cc:509] Updating boot flags...
Mar 2 13:06:49.232922 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1951)
Mar 2 13:06:49.396079 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1952)
Mar 2 13:06:51.064119 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 2 13:06:51.108293 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:06:51.906740 containerd[1474]: time="2026-03-02T13:06:51.905420843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:06:51.949432 containerd[1474]: time="2026-03-02T13:06:51.915564616Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Mar 2 13:06:51.953255 containerd[1474]: time="2026-03-02T13:06:51.953087883Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:06:51.960377 containerd[1474]: time="2026-03-02T13:06:51.960286075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:06:51.962950 containerd[1474]: time="2026-03-02T13:06:51.962892452Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 5.804260472s"
Mar 2 13:06:51.963081 containerd[1474]: time="2026-03-02T13:06:51.962949205Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Mar 2 13:06:51.970851 containerd[1474]: time="2026-03-02T13:06:51.970333511Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 2 13:06:52.088142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:06:52.097648 (kubelet)[1965]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:06:52.446196 kubelet[1965]: E0302 13:06:52.445625 1965 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:06:52.473464 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:06:52.474063 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:06:53.486633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4070345794.mount: Deactivated successfully.
Mar 2 13:06:53.511156 containerd[1474]: time="2026-03-02T13:06:53.509204470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:06:53.518921 containerd[1474]: time="2026-03-02T13:06:53.518664303Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 2 13:06:53.519110 containerd[1474]: time="2026-03-02T13:06:53.518915217Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:06:53.553332 containerd[1474]: time="2026-03-02T13:06:53.551893294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:06:53.579245 containerd[1474]: time="2026-03-02T13:06:53.575231052Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.604752192s"
Mar 2 13:06:53.579245 containerd[1474]: time="2026-03-02T13:06:53.579994617Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 2 13:06:53.605390 containerd[1474]: time="2026-03-02T13:06:53.603987880Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 2 13:06:56.616963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4128475823.mount: Deactivated successfully.
Mar 2 13:07:02.561652 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 2 13:07:02.578139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:07:02.991329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:07:03.000989 (kubelet)[2035]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:07:03.214941 kubelet[2035]: E0302 13:07:03.214290 2035 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:07:03.220995 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:07:03.221412 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:07:04.188084 containerd[1474]: time="2026-03-02T13:07:04.187303470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:04.195337 containerd[1474]: time="2026-03-02T13:07:04.191098111Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674"
Mar 2 13:07:04.199487 containerd[1474]: time="2026-03-02T13:07:04.197059691Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:04.215444 containerd[1474]: time="2026-03-02T13:07:04.214759057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:04.218118 containerd[1474]: time="2026-03-02T13:07:04.218040485Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 10.609092686s"
Mar 2 13:07:04.218118 containerd[1474]: time="2026-03-02T13:07:04.218082433Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Mar 2 13:07:13.311125 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 2 13:07:13.338151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:07:13.819357 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:07:13.819452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:07:13.982406 kubelet[2090]: E0302 13:07:13.982324 2090 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:07:13.987972 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:07:13.988392 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:07:16.238401 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:07:16.253368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:07:16.309333 systemd[1]: Reloading requested from client PID 2106 ('systemctl') (unit session-5.scope)...
Mar 2 13:07:16.311225 systemd[1]: Reloading...
Mar 2 13:07:16.486948 zram_generator::config[2145]: No configuration found.
Mar 2 13:07:16.690534 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:07:16.818579 systemd[1]: Reloading finished in 505 ms.
Mar 2 13:07:16.925571 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:07:16.936299 systemd[1]: kubelet.service: Deactivated successfully.
Mar 2 13:07:16.936875 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:07:16.950755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:07:17.198322 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:07:17.210545 (kubelet)[2195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 2 13:07:17.484009 kubelet[2195]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 2 13:07:17.484009 kubelet[2195]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 2 13:07:17.484009 kubelet[2195]: I0302 13:07:17.482694 2195 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 2 13:07:18.484555 kubelet[2195]: I0302 13:07:18.483951 2195 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 2 13:07:18.484555 kubelet[2195]: I0302 13:07:18.484165 2195 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 2 13:07:18.486525 kubelet[2195]: I0302 13:07:18.486487 2195 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 2 13:07:18.486559 kubelet[2195]: I0302 13:07:18.486509 2195 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 2 13:07:18.487852 kubelet[2195]: I0302 13:07:18.487171 2195 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 2 13:07:18.595008 kubelet[2195]: E0302 13:07:18.594921 2195 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 2 13:07:18.605282 kubelet[2195]: I0302 13:07:18.605188 2195 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 2 13:07:18.619190 kubelet[2195]: E0302 13:07:18.618909 2195 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 2 13:07:18.619190 kubelet[2195]: I0302 13:07:18.619040 2195 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 2 13:07:18.661681 kubelet[2195]: I0302 13:07:18.659614 2195 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 2 13:07:18.672418 kubelet[2195]: I0302 13:07:18.668565 2195 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 2 13:07:18.672418 kubelet[2195]: I0302 13:07:18.672090 2195 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 2 13:07:18.672418 kubelet[2195]: I0302 13:07:18.673024 2195 topology_manager.go:138] "Creating topology manager with none policy"
Mar 2 13:07:18.672418 kubelet[2195]: I0302 13:07:18.673051 2195 container_manager_linux.go:306] "Creating device plugin manager"
Mar 2 13:07:18.677024 kubelet[2195]: I0302 13:07:18.674583 2195 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 2 13:07:18.700747 kubelet[2195]: I0302 13:07:18.700052 2195 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 13:07:18.703138 kubelet[2195]: I0302 13:07:18.701721 2195 kubelet.go:475] "Attempting to sync node with API server"
Mar 2 13:07:18.703138 kubelet[2195]: I0302 13:07:18.701866 2195 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 2 13:07:18.703138 kubelet[2195]: I0302 13:07:18.701950 2195 kubelet.go:387] "Adding apiserver pod source"
Mar 2 13:07:18.703138 kubelet[2195]: I0302 13:07:18.702068 2195 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 2 13:07:18.704586 kubelet[2195]: E0302 13:07:18.704464 2195 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 2 13:07:18.705584 kubelet[2195]: E0302 13:07:18.705417 2195 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 2 13:07:18.709907 kubelet[2195]: I0302 13:07:18.709695 2195 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 2 13:07:18.710569 kubelet[2195]: I0302 13:07:18.710493 2195 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 2 13:07:18.710569 kubelet[2195]: I0302 13:07:18.710569 2195 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 2 13:07:18.710943 kubelet[2195]: W0302 13:07:18.710710 2195 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 2 13:07:18.736610 kubelet[2195]: I0302 13:07:18.729496 2195 server.go:1262] "Started kubelet"
Mar 2 13:07:18.744082 kubelet[2195]: I0302 13:07:18.737117 2195 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 2 13:07:18.744082 kubelet[2195]: I0302 13:07:18.743993 2195 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 2 13:07:18.747381 kubelet[2195]: I0302 13:07:18.745135 2195 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 2 13:07:18.747381 kubelet[2195]: I0302 13:07:18.740089 2195 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 2 13:07:18.755930 kubelet[2195]: I0302 13:07:18.753036 2195 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 2 13:07:18.755930 kubelet[2195]: I0302 13:07:18.753972 2195 server.go:310] "Adding debug handlers to kubelet server"
Mar 2 13:07:18.759025 kubelet[2195]: I0302 13:07:18.757876 2195 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 2 13:07:18.760940 kubelet[2195]: I0302 13:07:18.758840 2195 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 2 13:07:18.761236 kubelet[2195]: I0302 13:07:18.758856 2195 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 2 13:07:18.761236 kubelet[2195]: E0302 13:07:18.759302 2195 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 2 13:07:18.761449 kubelet[2195]: I0302 13:07:18.761288 2195 reconciler.go:29] "Reconciler: start to sync state"
Mar 2 13:07:18.761449 kubelet[2195]: E0302 13:07:18.761423 2195 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="200ms"
Mar 2 13:07:18.761722 kubelet[2195]: E0302 13:07:18.758204 2195 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 2 13:07:18.761722 kubelet[2195]: E0302 13:07:18.761371 2195 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 2 13:07:18.762705 kubelet[2195]: E0302 13:07:18.758870 2195 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.53:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.53:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1899081a9c1f5354 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:07:18.72847138 +0000 UTC m=+1.510504628,LastTimestamp:2026-03-02 13:07:18.72847138 +0000 UTC m=+1.510504628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 2 13:07:18.764559 kubelet[2195]: I0302 13:07:18.764421 2195 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 2 13:07:18.770431 kubelet[2195]: I0302 13:07:18.770340 2195 factory.go:223] Registration of the containerd container factory successfully
Mar 2 13:07:18.770431 kubelet[2195]: I0302 13:07:18.770411 2195 factory.go:223] Registration of the systemd container factory successfully
Mar 2 13:07:18.802539 kubelet[2195]: I0302 13:07:18.802460 2195 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 2 13:07:18.802539 kubelet[2195]: I0302 13:07:18.802500 2195 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 2 13:07:18.802539 kubelet[2195]: I0302 13:07:18.802539 2195 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 13:07:18.807711 kubelet[2195]: I0302 13:07:18.807617 2195 policy_none.go:49] "None policy: Start"
Mar 2 13:07:18.807711 kubelet[2195]: I0302 13:07:18.807708 2195 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 2 13:07:18.807956 kubelet[2195]: I0302 13:07:18.807734 2195 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 2 13:07:18.812722 kubelet[2195]: I0302 13:07:18.812304 2195 policy_none.go:47] "Start"
Mar 2 13:07:18.832716 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 2 13:07:18.835626 kubelet[2195]: I0302 13:07:18.835515 2195 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 2 13:07:18.838982 kubelet[2195]: I0302 13:07:18.838892 2195 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 2 13:07:18.838982 kubelet[2195]: I0302 13:07:18.838948 2195 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 2 13:07:18.838982 kubelet[2195]: I0302 13:07:18.838981 2195 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 2 13:07:18.839911 kubelet[2195]: E0302 13:07:18.839047 2195 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 2 13:07:18.840463 kubelet[2195]: E0302 13:07:18.840369 2195 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 2 13:07:18.853315 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 2 13:07:18.861871 kubelet[2195]: E0302 13:07:18.861515 2195 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 2 13:07:18.862690 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 2 13:07:18.883348 kubelet[2195]: E0302 13:07:18.882286 2195 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 2 13:07:18.884364 kubelet[2195]: I0302 13:07:18.884312 2195 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 2 13:07:18.884364 kubelet[2195]: I0302 13:07:18.884337 2195 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 2 13:07:18.885232 kubelet[2195]: I0302 13:07:18.884885 2195 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 2 13:07:18.888011 kubelet[2195]: E0302 13:07:18.887879 2195 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 2 13:07:18.888086 kubelet[2195]: E0302 13:07:18.888016 2195 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 2 13:07:18.971205 kubelet[2195]: E0302 13:07:18.970563 2195 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="400ms"
Mar 2 13:07:19.064414 kubelet[2195]: I0302 13:07:19.064133 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0615b161db6e7302f4654e0a189e6aa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a0615b161db6e7302f4654e0a189e6aa\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 13:07:19.064414 kubelet[2195]: I0302 13:07:19.064339 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0615b161db6e7302f4654e0a189e6aa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0615b161db6e7302f4654e0a189e6aa\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 13:07:19.065616 kubelet[2195]: I0302 13:07:19.064407 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0615b161db6e7302f4654e0a189e6aa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0615b161db6e7302f4654e0a189e6aa\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 13:07:19.075121 kubelet[2195]: I0302 13:07:19.074888 2195 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 13:07:19.075604 kubelet[2195]: E0302 13:07:19.075482 2195 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost"
Mar 2 13:07:19.090491 systemd[1]: Created slice kubepods-burstable-poda0615b161db6e7302f4654e0a189e6aa.slice - libcontainer container kubepods-burstable-poda0615b161db6e7302f4654e0a189e6aa.slice.
Mar 2 13:07:19.112301 kubelet[2195]: E0302 13:07:19.112109 2195 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:07:19.119027 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice.
Mar 2 13:07:19.137847 kubelet[2195]: E0302 13:07:19.137691 2195 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:07:19.146498 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice.
Mar 2 13:07:19.154555 kubelet[2195]: E0302 13:07:19.154456 2195 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:07:19.165979 kubelet[2195]: I0302 13:07:19.164989 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:07:19.165979 kubelet[2195]: I0302 13:07:19.165076 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 2 13:07:19.165979 kubelet[2195]: I0302 13:07:19.165122 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:07:19.165979 kubelet[2195]: I0302 13:07:19.165149 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:07:19.165979 kubelet[2195]: I0302 13:07:19.165177 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:07:19.166326 kubelet[2195]: I0302 13:07:19.165224 2195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:07:19.304115 kubelet[2195]: I0302 13:07:19.296319 2195 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 13:07:19.304115 kubelet[2195]: E0302 13:07:19.297716 2195 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost"
Mar 2 13:07:19.401696 kubelet[2195]: E0302 13:07:19.376640 2195 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="800ms"
Mar 2 13:07:19.419547 kubelet[2195]: E0302 13:07:19.419418 2195 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:19.437611 containerd[1474]: time="2026-03-02T13:07:19.437495131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a0615b161db6e7302f4654e0a189e6aa,Namespace:kube-system,Attempt:0,}"
Mar 2 13:07:19.450520 kubelet[2195]: E0302 13:07:19.450428 2195 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:19.451929 containerd[1474]: time="2026-03-02T13:07:19.451709669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}"
Mar 2 13:07:19.468464 kubelet[2195]: E0302 13:07:19.468306 2195 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:19.470308 containerd[1474]: time="2026-03-02T13:07:19.470221604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}"
Mar 2 13:07:19.591889 kubelet[2195]: E0302 13:07:19.590381 2195 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 2 13:07:19.699326 kubelet[2195]: E0302 13:07:19.692166 2195 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 2 13:07:19.703131 kubelet[2195]: I0302 13:07:19.703007 2195 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 13:07:19.703912 kubelet[2195]: E0302 13:07:19.703751 2195 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost"
Mar 2 13:07:19.739291 kubelet[2195]: E0302 13:07:19.738213 2195 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 2 13:07:19.845706 kubelet[2195]: E0302 13:07:19.845260 2195 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 2 13:07:20.180874 kubelet[2195]: E0302 13:07:20.180505 2195 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="1.6s"
Mar 2 13:07:20.305329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2619473327.mount: Deactivated successfully.
Mar 2 13:07:20.366215 containerd[1474]: time="2026-03-02T13:07:20.365656213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:07:20.370271 containerd[1474]: time="2026-03-02T13:07:20.369949485Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 2 13:07:20.373481 containerd[1474]: time="2026-03-02T13:07:20.372316476Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:07:20.376083 containerd[1474]: time="2026-03-02T13:07:20.375956837Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:07:20.381570 containerd[1474]: time="2026-03-02T13:07:20.381116857Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 2 13:07:20.381570 containerd[1474]: time="2026-03-02T13:07:20.381486105Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:07:20.382002 containerd[1474]: time="2026-03-02T13:07:20.381887984Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 2 13:07:20.386075 containerd[1474]: time="2026-03-02T13:07:20.386010284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:07:20.389070 containerd[1474]: time="2026-03-02T13:07:20.389013009Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 951.274986ms"
Mar 2 13:07:20.389960 containerd[1474]: time="2026-03-02T13:07:20.389930983Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 919.563267ms"
Mar 2 13:07:20.391905 containerd[1474]: time="2026-03-02T13:07:20.391622905Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 939.724214ms"
Mar 2 13:07:20.540157 kubelet[2195]: I0302 13:07:20.538337 2195 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 13:07:20.540157 kubelet[2195]: E0302 13:07:20.539550 2195 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost"
Mar 2 13:07:20.714032 kubelet[2195]: E0302 13:07:20.713394 2195 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 2 13:07:21.485171 containerd[1474]: time="2026-03-02T13:07:21.483953674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:07:21.485171 containerd[1474]: time="2026-03-02T13:07:21.484589639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:07:21.485171 containerd[1474]: time="2026-03-02T13:07:21.484616979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:07:21.485171 containerd[1474]: time="2026-03-02T13:07:21.484338406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:07:21.485171 containerd[1474]: time="2026-03-02T13:07:21.484581509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:07:21.485171 containerd[1474]: time="2026-03-02T13:07:21.484600755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:07:21.495297 containerd[1474]: time="2026-03-02T13:07:21.489912707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:07:21.495297 containerd[1474]: time="2026-03-02T13:07:21.489965175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:07:21.495297 containerd[1474]: time="2026-03-02T13:07:21.490003266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:07:21.495297 containerd[1474]: time="2026-03-02T13:07:21.490134531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:07:21.495297 containerd[1474]: time="2026-03-02T13:07:21.485117838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:07:21.495297 containerd[1474]: time="2026-03-02T13:07:21.485099078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:07:21.576517 systemd[1]: Started cri-containerd-034c4e5da14b5b32bdac419b328fe3822ea3c5d5117f877487ecebbb5b253fc6.scope - libcontainer container 034c4e5da14b5b32bdac419b328fe3822ea3c5d5117f877487ecebbb5b253fc6.
Mar 2 13:07:21.583141 systemd[1]: Started cri-containerd-5f69f682bc11047c0975cc7412be4b3c7972dad1123fa32aa78091a4fa075af6.scope - libcontainer container 5f69f682bc11047c0975cc7412be4b3c7972dad1123fa32aa78091a4fa075af6.
Mar 2 13:07:21.591238 systemd[1]: Started cri-containerd-7f90f1cf1a491037e041454e33bba53cca79fd5128ae949b93d767bb8128a32c.scope - libcontainer container 7f90f1cf1a491037e041454e33bba53cca79fd5128ae949b93d767bb8128a32c.
Mar 2 13:07:21.676319 containerd[1474]: time="2026-03-02T13:07:21.675951967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a0615b161db6e7302f4654e0a189e6aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"034c4e5da14b5b32bdac419b328fe3822ea3c5d5117f877487ecebbb5b253fc6\""
Mar 2 13:07:21.681425 kubelet[2195]: E0302 13:07:21.681065 2195 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:21.706910 containerd[1474]: time="2026-03-02T13:07:21.705574581Z" level=info msg="CreateContainer within sandbox \"034c4e5da14b5b32bdac419b328fe3822ea3c5d5117f877487ecebbb5b253fc6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 2 13:07:21.719042 containerd[1474]: time="2026-03-02T13:07:21.718909079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f69f682bc11047c0975cc7412be4b3c7972dad1123fa32aa78091a4fa075af6\""
Mar 2 13:07:21.720536 kubelet[2195]: E0302 13:07:21.720181 2195 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:21.757968 containerd[1474]: time="2026-03-02T13:07:21.756250847Z" level=info msg="CreateContainer within sandbox \"5f69f682bc11047c0975cc7412be4b3c7972dad1123fa32aa78091a4fa075af6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 2 13:07:21.763500 containerd[1474]: time="2026-03-02T13:07:21.763174217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f90f1cf1a491037e041454e33bba53cca79fd5128ae949b93d767bb8128a32c\""
Mar 2 13:07:21.767296 kubelet[2195]: E0302 13:07:21.766983 2195 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:21.782561 kubelet[2195]: E0302 13:07:21.782486 2195 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="3.2s"
Mar 2 13:07:21.783462 containerd[1474]: time="2026-03-02T13:07:21.783369726Z" level=info msg="CreateContainer within sandbox \"7f90f1cf1a491037e041454e33bba53cca79fd5128ae949b93d767bb8128a32c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 2 13:07:21.796282 containerd[1474]: time="2026-03-02T13:07:21.796070979Z" level=info msg="CreateContainer within sandbox \"034c4e5da14b5b32bdac419b328fe3822ea3c5d5117f877487ecebbb5b253fc6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9408b821b6d09f08b3f7c0076d35a43b29ed789e36a1255cfbef37c035532d31\""
Mar 2 13:07:21.797956 containerd[1474]: time="2026-03-02T13:07:21.797874149Z" level=info msg="StartContainer for \"9408b821b6d09f08b3f7c0076d35a43b29ed789e36a1255cfbef37c035532d31\""
Mar 2 13:07:21.820172 containerd[1474]: time="2026-03-02T13:07:21.820105104Z" level=info msg="CreateContainer within sandbox \"5f69f682bc11047c0975cc7412be4b3c7972dad1123fa32aa78091a4fa075af6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"625ab6c0783864c2653471c2fbf6b0197dca220f1ac19cdfe560bd900870494a\""
Mar 2 13:07:21.826248 containerd[1474]: time="2026-03-02T13:07:21.824311641Z" level=info msg="StartContainer for \"625ab6c0783864c2653471c2fbf6b0197dca220f1ac19cdfe560bd900870494a\""
Mar 2 13:07:21.842839 containerd[1474]: time="2026-03-02T13:07:21.837100005Z" level=info msg="CreateContainer within sandbox \"7f90f1cf1a491037e041454e33bba53cca79fd5128ae949b93d767bb8128a32c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bb5c0deecae8755ea0f226d5e2393e8c9c204482d1bf18c4ff44b94c4abef8d0\""
Mar 2 13:07:21.842839 containerd[1474]: time="2026-03-02T13:07:21.841506685Z" level=info msg="StartContainer for \"bb5c0deecae8755ea0f226d5e2393e8c9c204482d1bf18c4ff44b94c4abef8d0\""
Mar 2 13:07:21.874045 systemd[1]: Started cri-containerd-9408b821b6d09f08b3f7c0076d35a43b29ed789e36a1255cfbef37c035532d31.scope - libcontainer container 9408b821b6d09f08b3f7c0076d35a43b29ed789e36a1255cfbef37c035532d31.
Mar 2 13:07:21.905037 systemd[1]: Started cri-containerd-625ab6c0783864c2653471c2fbf6b0197dca220f1ac19cdfe560bd900870494a.scope - libcontainer container 625ab6c0783864c2653471c2fbf6b0197dca220f1ac19cdfe560bd900870494a.
Mar 2 13:07:21.929044 kubelet[2195]: E0302 13:07:21.928623 2195 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 2 13:07:21.955343 systemd[1]: Started cri-containerd-bb5c0deecae8755ea0f226d5e2393e8c9c204482d1bf18c4ff44b94c4abef8d0.scope - libcontainer container bb5c0deecae8755ea0f226d5e2393e8c9c204482d1bf18c4ff44b94c4abef8d0.
Mar 2 13:07:22.012501 containerd[1474]: time="2026-03-02T13:07:22.012238311Z" level=info msg="StartContainer for \"9408b821b6d09f08b3f7c0076d35a43b29ed789e36a1255cfbef37c035532d31\" returns successfully" Mar 2 13:07:22.144838 kubelet[2195]: I0302 13:07:22.142340 2195 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:07:22.144838 kubelet[2195]: E0302 13:07:22.142941 2195 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Mar 2 13:07:22.311749 containerd[1474]: time="2026-03-02T13:07:22.311633839Z" level=info msg="StartContainer for \"bb5c0deecae8755ea0f226d5e2393e8c9c204482d1bf18c4ff44b94c4abef8d0\" returns successfully" Mar 2 13:07:22.365162 containerd[1474]: time="2026-03-02T13:07:22.365053304Z" level=info msg="StartContainer for \"625ab6c0783864c2653471c2fbf6b0197dca220f1ac19cdfe560bd900870494a\" returns successfully" Mar 2 13:07:22.986922 kubelet[2195]: E0302 13:07:22.982486 2195 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:07:22.986922 kubelet[2195]: E0302 13:07:22.982941 2195 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:23.003413 kubelet[2195]: E0302 13:07:23.002198 2195 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:07:23.003413 kubelet[2195]: E0302 13:07:23.002447 2195 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:23.024647 kubelet[2195]: E0302 13:07:23.020453 2195 kubelet.go:3216] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:07:23.035316 kubelet[2195]: E0302 13:07:23.025455 2195 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:24.017242 kubelet[2195]: E0302 13:07:24.015871 2195 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:07:24.017242 kubelet[2195]: E0302 13:07:24.016156 2195 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:24.018591 kubelet[2195]: E0302 13:07:24.017383 2195 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:07:24.018591 kubelet[2195]: E0302 13:07:24.017566 2195 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:07:24.018591 kubelet[2195]: E0302 13:07:24.017594 2195 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:24.020012 kubelet[2195]: E0302 13:07:24.019877 2195 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:25.027637 kubelet[2195]: E0302 13:07:25.027561 2195 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:07:25.028288 kubelet[2195]: E0302 13:07:25.027876 2195 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:25.028288 kubelet[2195]: E0302 13:07:25.028178 2195 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:07:25.028360 kubelet[2195]: E0302 13:07:25.028330 2195 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:25.034435 kubelet[2195]: E0302 13:07:25.028711 2195 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:07:25.034435 kubelet[2195]: E0302 13:07:25.029676 2195 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:25.346076 kubelet[2195]: I0302 13:07:25.345907 2195 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:07:25.772070 kubelet[2195]: E0302 13:07:25.771896 2195 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 2 13:07:25.838904 kubelet[2195]: I0302 13:07:25.838047 2195 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 2 13:07:25.860435 kubelet[2195]: I0302 13:07:25.860341 2195 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 13:07:25.873559 kubelet[2195]: E0302 13:07:25.873283 2195 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 2 13:07:25.873559 
kubelet[2195]: I0302 13:07:25.873358 2195 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 13:07:25.875617 kubelet[2195]: E0302 13:07:25.875536 2195 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 2 13:07:25.875617 kubelet[2195]: I0302 13:07:25.875602 2195 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 13:07:25.877905 kubelet[2195]: E0302 13:07:25.877696 2195 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 2 13:07:26.022620 kubelet[2195]: I0302 13:07:26.022354 2195 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 13:07:26.026090 kubelet[2195]: E0302 13:07:26.026026 2195 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 2 13:07:26.026285 kubelet[2195]: E0302 13:07:26.026264 2195 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:26.223895 kubelet[2195]: I0302 13:07:26.223624 2195 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 13:07:26.227585 kubelet[2195]: E0302 13:07:26.227469 2195 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 2 13:07:26.227900 kubelet[2195]: 
E0302 13:07:26.227731 2195 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:26.725067 kubelet[2195]: I0302 13:07:26.724982 2195 apiserver.go:52] "Watching apiserver" Mar 2 13:07:26.762042 kubelet[2195]: I0302 13:07:26.761989 2195 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 2 13:07:31.457727 systemd[1]: Reloading requested from client PID 2494 ('systemctl') (unit session-5.scope)... Mar 2 13:07:31.457883 systemd[1]: Reloading... Mar 2 13:07:31.697930 zram_generator::config[2536]: No configuration found. Mar 2 13:07:32.266572 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:07:32.466602 systemd[1]: Reloading finished in 1008 ms. Mar 2 13:07:32.582645 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:07:32.644123 systemd[1]: kubelet.service: Deactivated successfully. Mar 2 13:07:32.645148 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:07:32.645239 systemd[1]: kubelet.service: Consumed 4.397s CPU time, 127.8M memory peak, 0B memory swap peak. Mar 2 13:07:32.765365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:07:33.307538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:07:33.317124 (kubelet)[2577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 13:07:33.488094 kubelet[2577]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Mar 2 13:07:33.488094 kubelet[2577]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 13:07:33.488094 kubelet[2577]: I0302 13:07:33.487613 2577 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 2 13:07:33.502660 kubelet[2577]: I0302 13:07:33.499579 2577 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 2 13:07:33.502660 kubelet[2577]: I0302 13:07:33.499614 2577 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 13:07:33.502660 kubelet[2577]: I0302 13:07:33.499658 2577 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 2 13:07:33.502660 kubelet[2577]: I0302 13:07:33.499679 2577 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 2 13:07:33.502660 kubelet[2577]: I0302 13:07:33.500177 2577 server.go:956] "Client rotation is on, will bootstrap in background" Mar 2 13:07:33.502660 kubelet[2577]: I0302 13:07:33.502170 2577 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 2 13:07:33.514700 kubelet[2577]: I0302 13:07:33.514507 2577 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 13:07:33.545965 kubelet[2577]: E0302 13:07:33.545519 2577 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 2 13:07:33.547592 kubelet[2577]: I0302 13:07:33.545721 2577 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Mar 2 13:07:33.578190 kubelet[2577]: I0302 13:07:33.577154 2577 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 2 13:07:33.578190 kubelet[2577]: I0302 13:07:33.577872 2577 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 13:07:33.578190 kubelet[2577]: I0302 13:07:33.577914 2577 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 
13:07:33.578190 kubelet[2577]: I0302 13:07:33.578144 2577 topology_manager.go:138] "Creating topology manager with none policy" Mar 2 13:07:33.583620 kubelet[2577]: I0302 13:07:33.578159 2577 container_manager_linux.go:306] "Creating device plugin manager" Mar 2 13:07:33.583620 kubelet[2577]: I0302 13:07:33.578199 2577 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 2 13:07:33.583620 kubelet[2577]: I0302 13:07:33.578818 2577 state_mem.go:36] "Initialized new in-memory state store" Mar 2 13:07:33.583620 kubelet[2577]: I0302 13:07:33.579247 2577 kubelet.go:475] "Attempting to sync node with API server" Mar 2 13:07:33.583620 kubelet[2577]: I0302 13:07:33.580288 2577 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 13:07:33.583620 kubelet[2577]: I0302 13:07:33.580322 2577 kubelet.go:387] "Adding apiserver pod source" Mar 2 13:07:33.583620 kubelet[2577]: I0302 13:07:33.580343 2577 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 13:07:33.586458 kubelet[2577]: I0302 13:07:33.584944 2577 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 2 13:07:33.586559 kubelet[2577]: I0302 13:07:33.586453 2577 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 13:07:33.586559 kubelet[2577]: I0302 13:07:33.586495 2577 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 2 13:07:33.638635 kubelet[2577]: I0302 13:07:33.638505 2577 server.go:1262] "Started kubelet" Mar 2 13:07:33.652149 kubelet[2577]: I0302 13:07:33.650160 2577 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 13:07:33.715220 kubelet[2577]: I0302 13:07:33.664046 2577 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Mar 2 13:07:33.733604 kubelet[2577]: I0302 13:07:33.730193 2577 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 13:07:33.737264 kubelet[2577]: I0302 13:07:33.737179 2577 server.go:310] "Adding debug handlers to kubelet server" Mar 2 13:07:33.744748 kubelet[2577]: I0302 13:07:33.744676 2577 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 2 13:07:33.748435 kubelet[2577]: I0302 13:07:33.748408 2577 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 2 13:07:33.763916 kubelet[2577]: I0302 13:07:33.763688 2577 reconciler.go:29] "Reconciler: start to sync state" Mar 2 13:07:33.765669 kubelet[2577]: I0302 13:07:33.765596 2577 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 13:07:33.779981 kubelet[2577]: I0302 13:07:33.778496 2577 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 13:07:33.779981 kubelet[2577]: I0302 13:07:33.778612 2577 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 2 13:07:33.779981 kubelet[2577]: I0302 13:07:33.779105 2577 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 13:07:33.779981 kubelet[2577]: I0302 13:07:33.779378 2577 factory.go:223] Registration of the containerd container factory successfully Mar 2 13:07:33.779981 kubelet[2577]: I0302 13:07:33.779400 2577 factory.go:223] Registration of the systemd container factory successfully Mar 2 13:07:33.795920 kubelet[2577]: E0302 13:07:33.794902 2577 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 2 13:07:33.815413 kubelet[2577]: I0302 13:07:33.815185 2577 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 2 13:07:33.819622 kubelet[2577]: I0302 13:07:33.819056 2577 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 2 13:07:33.819622 kubelet[2577]: I0302 13:07:33.819124 2577 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 2 13:07:33.819622 kubelet[2577]: I0302 13:07:33.819161 2577 kubelet.go:2428] "Starting kubelet main sync loop" Mar 2 13:07:33.819622 kubelet[2577]: E0302 13:07:33.819302 2577 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 13:07:33.957655 kubelet[2577]: E0302 13:07:33.954623 2577 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 2 13:07:34.102996 kubelet[2577]: I0302 13:07:34.102918 2577 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 2 13:07:34.102996 kubelet[2577]: I0302 13:07:34.102980 2577 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 2 13:07:34.102996 kubelet[2577]: I0302 13:07:34.103013 2577 state_mem.go:36] "Initialized new in-memory state store" Mar 2 13:07:34.104185 kubelet[2577]: I0302 13:07:34.103229 2577 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 2 13:07:34.104185 kubelet[2577]: I0302 13:07:34.103248 2577 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 2 13:07:34.104185 kubelet[2577]: I0302 13:07:34.103276 2577 policy_none.go:49] "None policy: Start" Mar 2 13:07:34.104185 kubelet[2577]: I0302 13:07:34.103337 2577 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 2 13:07:34.104185 kubelet[2577]: I0302 13:07:34.103358 2577 state_mem.go:36] "Initializing new in-memory state store" logger="Memory 
Manager state checkpoint" Mar 2 13:07:34.104185 kubelet[2577]: I0302 13:07:34.103503 2577 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 2 13:07:34.104185 kubelet[2577]: I0302 13:07:34.103521 2577 policy_none.go:47] "Start" Mar 2 13:07:34.158302 kubelet[2577]: E0302 13:07:34.155834 2577 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 13:07:34.171246 kubelet[2577]: E0302 13:07:34.161039 2577 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 2 13:07:34.171246 kubelet[2577]: I0302 13:07:34.163673 2577 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 2 13:07:34.171246 kubelet[2577]: I0302 13:07:34.166522 2577 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 13:07:34.171246 kubelet[2577]: I0302 13:07:34.170444 2577 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 2 13:07:34.196398 kubelet[2577]: E0302 13:07:34.192232 2577 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 2 13:07:34.361973 kubelet[2577]: I0302 13:07:34.361745 2577 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:07:34.400906 kubelet[2577]: I0302 13:07:34.399257 2577 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 2 13:07:34.400906 kubelet[2577]: I0302 13:07:34.399441 2577 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 2 13:07:34.573014 kubelet[2577]: I0302 13:07:34.572477 2577 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 13:07:34.582118 kubelet[2577]: I0302 13:07:34.576086 2577 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 13:07:34.582118 kubelet[2577]: I0302 13:07:34.576503 2577 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 13:07:34.586930 kubelet[2577]: I0302 13:07:34.584992 2577 apiserver.go:52] "Watching apiserver" Mar 2 13:07:34.650112 kubelet[2577]: I0302 13:07:34.649334 2577 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 2 13:07:34.687156 kubelet[2577]: I0302 13:07:34.686609 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0615b161db6e7302f4654e0a189e6aa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0615b161db6e7302f4654e0a189e6aa\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:07:34.687156 kubelet[2577]: I0302 13:07:34.686956 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0615b161db6e7302f4654e0a189e6aa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a0615b161db6e7302f4654e0a189e6aa\") " 
pod="kube-system/kube-apiserver-localhost" Mar 2 13:07:34.687156 kubelet[2577]: I0302 13:07:34.687239 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:07:34.687156 kubelet[2577]: I0302 13:07:34.687286 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0615b161db6e7302f4654e0a189e6aa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0615b161db6e7302f4654e0a189e6aa\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:07:34.687156 kubelet[2577]: I0302 13:07:34.687322 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:07:34.690452 kubelet[2577]: I0302 13:07:34.687412 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:07:34.690452 kubelet[2577]: I0302 13:07:34.687438 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 2 13:07:34.690452 kubelet[2577]: I0302 13:07:34.687465 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:07:34.690452 kubelet[2577]: I0302 13:07:34.687534 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 2 13:07:34.890060 kubelet[2577]: E0302 13:07:34.889667 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:34.895621 kubelet[2577]: E0302 13:07:34.894747 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:34.900421 kubelet[2577]: E0302 13:07:34.899741 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:35.047069 kubelet[2577]: E0302 13:07:35.046505 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:35.047069 kubelet[2577]: E0302 13:07:35.046548 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:35.050610 kubelet[2577]: E0302 13:07:35.048616 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:35.076743 kubelet[2577]: I0302 13:07:35.076198 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.07617269 podStartE2EDuration="1.07617269s" podCreationTimestamp="2026-03-02 13:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:07:35.004466843 +0000 UTC m=+1.602998072" watchObservedRunningTime="2026-03-02 13:07:35.07617269 +0000 UTC m=+1.674703908" Mar 2 13:07:35.101716 kubelet[2577]: I0302 13:07:35.101109 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.101091571 podStartE2EDuration="1.101091571s" podCreationTimestamp="2026-03-02 13:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:07:35.077487352 +0000 UTC m=+1.676018581" watchObservedRunningTime="2026-03-02 13:07:35.101091571 +0000 UTC m=+1.699622790" Mar 2 13:07:35.101716 kubelet[2577]: I0302 13:07:35.101194 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.101187649 podStartE2EDuration="1.101187649s" podCreationTimestamp="2026-03-02 13:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:07:35.098760763 +0000 UTC m=+1.697292022" watchObservedRunningTime="2026-03-02 13:07:35.101187649 +0000 UTC m=+1.699718869" Mar 2 13:07:36.112595 kubelet[2577]: 
E0302 13:07:36.111742 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:36.116185 kubelet[2577]: E0302 13:07:36.116151 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:37.111963 kubelet[2577]: E0302 13:07:37.111473 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:37.674850 kubelet[2577]: I0302 13:07:37.673730 2577 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 2 13:07:37.680712 containerd[1474]: time="2026-03-02T13:07:37.674302806Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 2 13:07:37.681705 kubelet[2577]: I0302 13:07:37.676981 2577 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 2 13:07:38.266116 kubelet[2577]: I0302 13:07:38.264438 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/54778f20-f7f1-4e42-b676-8a2e52dc257e-kube-proxy\") pod \"kube-proxy-ppr2k\" (UID: \"54778f20-f7f1-4e42-b676-8a2e52dc257e\") " pod="kube-system/kube-proxy-ppr2k" Mar 2 13:07:38.266116 kubelet[2577]: I0302 13:07:38.264482 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54778f20-f7f1-4e42-b676-8a2e52dc257e-xtables-lock\") pod \"kube-proxy-ppr2k\" (UID: \"54778f20-f7f1-4e42-b676-8a2e52dc257e\") " pod="kube-system/kube-proxy-ppr2k" Mar 2 13:07:38.266116 kubelet[2577]: I0302 13:07:38.264507 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltb8m\" (UniqueName: \"kubernetes.io/projected/54778f20-f7f1-4e42-b676-8a2e52dc257e-kube-api-access-ltb8m\") pod \"kube-proxy-ppr2k\" (UID: \"54778f20-f7f1-4e42-b676-8a2e52dc257e\") " pod="kube-system/kube-proxy-ppr2k" Mar 2 13:07:38.266116 kubelet[2577]: I0302 13:07:38.264543 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54778f20-f7f1-4e42-b676-8a2e52dc257e-lib-modules\") pod \"kube-proxy-ppr2k\" (UID: \"54778f20-f7f1-4e42-b676-8a2e52dc257e\") " pod="kube-system/kube-proxy-ppr2k" Mar 2 13:07:38.267635 systemd[1]: Created slice kubepods-besteffort-pod54778f20_f7f1_4e42_b676_8a2e52dc257e.slice - libcontainer container kubepods-besteffort-pod54778f20_f7f1_4e42_b676_8a2e52dc257e.slice. 
Mar 2 13:07:38.436262 kubelet[2577]: E0302 13:07:38.436165 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:38.592199 kubelet[2577]: E0302 13:07:38.592094 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:38.595162 containerd[1474]: time="2026-03-02T13:07:38.595051950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ppr2k,Uid:54778f20-f7f1-4e42-b676-8a2e52dc257e,Namespace:kube-system,Attempt:0,}"
Mar 2 13:07:38.703707 containerd[1474]: time="2026-03-02T13:07:38.702758596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:07:38.709362 containerd[1474]: time="2026-03-02T13:07:38.709018567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:07:38.710351 containerd[1474]: time="2026-03-02T13:07:38.710077455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:07:38.719913 containerd[1474]: time="2026-03-02T13:07:38.719359081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:07:38.771262 kubelet[2577]: I0302 13:07:38.771101 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/daab0c52-939c-4745-bb38-8ded8d2ab99d-run\") pod \"kube-flannel-ds-4tmrf\" (UID: \"daab0c52-939c-4745-bb38-8ded8d2ab99d\") " pod="kube-flannel/kube-flannel-ds-4tmrf"
Mar 2 13:07:38.771262 kubelet[2577]: I0302 13:07:38.771175 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/daab0c52-939c-4745-bb38-8ded8d2ab99d-cni-plugin\") pod \"kube-flannel-ds-4tmrf\" (UID: \"daab0c52-939c-4745-bb38-8ded8d2ab99d\") " pod="kube-flannel/kube-flannel-ds-4tmrf"
Mar 2 13:07:38.771262 kubelet[2577]: I0302 13:07:38.771199 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/daab0c52-939c-4745-bb38-8ded8d2ab99d-cni\") pod \"kube-flannel-ds-4tmrf\" (UID: \"daab0c52-939c-4745-bb38-8ded8d2ab99d\") " pod="kube-flannel/kube-flannel-ds-4tmrf"
Mar 2 13:07:38.782754 systemd[1]: Created slice kubepods-burstable-poddaab0c52_939c_4745_bb38_8ded8d2ab99d.slice - libcontainer container kubepods-burstable-poddaab0c52_939c_4745_bb38_8ded8d2ab99d.slice.
Mar 2 13:07:38.847028 systemd[1]: Started cri-containerd-23aab7acd796f2f0d208f60b58477062198336b3061299e7c143e00f7c7bd517.scope - libcontainer container 23aab7acd796f2f0d208f60b58477062198336b3061299e7c143e00f7c7bd517.
Mar 2 13:07:38.873153 kubelet[2577]: I0302 13:07:38.873050 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/daab0c52-939c-4745-bb38-8ded8d2ab99d-flannel-cfg\") pod \"kube-flannel-ds-4tmrf\" (UID: \"daab0c52-939c-4745-bb38-8ded8d2ab99d\") " pod="kube-flannel/kube-flannel-ds-4tmrf"
Mar 2 13:07:38.873153 kubelet[2577]: I0302 13:07:38.873136 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gltfz\" (UniqueName: \"kubernetes.io/projected/daab0c52-939c-4745-bb38-8ded8d2ab99d-kube-api-access-gltfz\") pod \"kube-flannel-ds-4tmrf\" (UID: \"daab0c52-939c-4745-bb38-8ded8d2ab99d\") " pod="kube-flannel/kube-flannel-ds-4tmrf"
Mar 2 13:07:38.873353 kubelet[2577]: I0302 13:07:38.873214 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/daab0c52-939c-4745-bb38-8ded8d2ab99d-xtables-lock\") pod \"kube-flannel-ds-4tmrf\" (UID: \"daab0c52-939c-4745-bb38-8ded8d2ab99d\") " pod="kube-flannel/kube-flannel-ds-4tmrf"
Mar 2 13:07:38.947617 containerd[1474]: time="2026-03-02T13:07:38.947564845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ppr2k,Uid:54778f20-f7f1-4e42-b676-8a2e52dc257e,Namespace:kube-system,Attempt:0,} returns sandbox id \"23aab7acd796f2f0d208f60b58477062198336b3061299e7c143e00f7c7bd517\""
Mar 2 13:07:38.954338 kubelet[2577]: E0302 13:07:38.954206 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:38.973279 containerd[1474]: time="2026-03-02T13:07:38.973140501Z" level=info msg="CreateContainer within sandbox \"23aab7acd796f2f0d208f60b58477062198336b3061299e7c143e00f7c7bd517\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 2 13:07:38.973560 sudo[1602]: pam_unix(sudo:session): session closed for user root
Mar 2 13:07:38.982528 sshd[1598]: pam_unix(sshd:session): session closed for user core
Mar 2 13:07:38.993941 systemd[1]: sshd@4-10.0.0.53:22-10.0.0.1:57818.service: Deactivated successfully.
Mar 2 13:07:39.011176 systemd[1]: session-5.scope: Deactivated successfully.
Mar 2 13:07:39.011508 systemd[1]: session-5.scope: Consumed 24.028s CPU time, 161.4M memory peak, 0B memory swap peak.
Mar 2 13:07:39.017523 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit.
Mar 2 13:07:39.027657 containerd[1474]: time="2026-03-02T13:07:39.027253746Z" level=info msg="CreateContainer within sandbox \"23aab7acd796f2f0d208f60b58477062198336b3061299e7c143e00f7c7bd517\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c22120b42d1b5f4cb3909b1534e852c78e2a95d3c95f583768210382e2594df9\""
Mar 2 13:07:39.028311 systemd-logind[1457]: Removed session 5.
Mar 2 13:07:39.030668 containerd[1474]: time="2026-03-02T13:07:39.029121851Z" level=info msg="StartContainer for \"c22120b42d1b5f4cb3909b1534e852c78e2a95d3c95f583768210382e2594df9\""
Mar 2 13:07:39.098218 kubelet[2577]: E0302 13:07:39.097579 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:39.100628 containerd[1474]: time="2026-03-02T13:07:39.099915709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-4tmrf,Uid:daab0c52-939c-4745-bb38-8ded8d2ab99d,Namespace:kube-flannel,Attempt:0,}"
Mar 2 13:07:39.104649 systemd[1]: Started cri-containerd-c22120b42d1b5f4cb3909b1534e852c78e2a95d3c95f583768210382e2594df9.scope - libcontainer container c22120b42d1b5f4cb3909b1534e852c78e2a95d3c95f583768210382e2594df9.
Mar 2 13:07:39.141958 kubelet[2577]: E0302 13:07:39.141368 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:39.185679 containerd[1474]: time="2026-03-02T13:07:39.185418996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:07:39.186087 containerd[1474]: time="2026-03-02T13:07:39.185634655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:07:39.186087 containerd[1474]: time="2026-03-02T13:07:39.185662615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:07:39.187581 containerd[1474]: time="2026-03-02T13:07:39.186609233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:07:39.198343 containerd[1474]: time="2026-03-02T13:07:39.198258762Z" level=info msg="StartContainer for \"c22120b42d1b5f4cb3909b1534e852c78e2a95d3c95f583768210382e2594df9\" returns successfully"
Mar 2 13:07:39.230122 systemd[1]: Started cri-containerd-b9909d19ea2fecba296d0ae416e410146f063a43c58329c9f8a80d88e09e83d2.scope - libcontainer container b9909d19ea2fecba296d0ae416e410146f063a43c58329c9f8a80d88e09e83d2.
Mar 2 13:07:39.331686 containerd[1474]: time="2026-03-02T13:07:39.331498162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-4tmrf,Uid:daab0c52-939c-4745-bb38-8ded8d2ab99d,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"b9909d19ea2fecba296d0ae416e410146f063a43c58329c9f8a80d88e09e83d2\""
Mar 2 13:07:39.333174 kubelet[2577]: E0302 13:07:39.333066 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:39.343170 containerd[1474]: time="2026-03-02T13:07:39.343112676Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\""
Mar 2 13:07:40.175140 kubelet[2577]: E0302 13:07:40.174972 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:40.215890 kubelet[2577]: I0302 13:07:40.215320 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ppr2k" podStartSLOduration=2.21529711 podStartE2EDuration="2.21529711s" podCreationTimestamp="2026-03-02 13:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:07:40.211015568 +0000 UTC m=+6.809546817" watchObservedRunningTime="2026-03-02 13:07:40.21529711 +0000 UTC m=+6.813828339"
Mar 2 13:07:40.299275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4237341861.mount: Deactivated successfully.
Mar 2 13:07:40.407391 containerd[1474]: time="2026-03-02T13:07:40.406978289Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:40.409672 containerd[1474]: time="2026-03-02T13:07:40.409558603Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008"
Mar 2 13:07:40.413075 containerd[1474]: time="2026-03-02T13:07:40.412888736Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:40.420751 containerd[1474]: time="2026-03-02T13:07:40.418034284Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:40.420751 containerd[1474]: time="2026-03-02T13:07:40.420505471Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 1.077336033s"
Mar 2 13:07:40.420751 containerd[1474]: time="2026-03-02T13:07:40.420559018Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\""
Mar 2 13:07:40.437550 containerd[1474]: time="2026-03-02T13:07:40.437213259Z" level=info msg="CreateContainer within sandbox \"b9909d19ea2fecba296d0ae416e410146f063a43c58329c9f8a80d88e09e83d2\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Mar 2 13:07:40.484592 containerd[1474]: time="2026-03-02T13:07:40.484421579Z" level=info msg="CreateContainer within sandbox \"b9909d19ea2fecba296d0ae416e410146f063a43c58329c9f8a80d88e09e83d2\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"bea94e8021c1ff1bac5f651432d88a6b5c63678bb1066c3f480951e52e89d721\""
Mar 2 13:07:40.485906 containerd[1474]: time="2026-03-02T13:07:40.485836290Z" level=info msg="StartContainer for \"bea94e8021c1ff1bac5f651432d88a6b5c63678bb1066c3f480951e52e89d721\""
Mar 2 13:07:40.615454 systemd[1]: Started cri-containerd-bea94e8021c1ff1bac5f651432d88a6b5c63678bb1066c3f480951e52e89d721.scope - libcontainer container bea94e8021c1ff1bac5f651432d88a6b5c63678bb1066c3f480951e52e89d721.
Mar 2 13:07:40.687915 containerd[1474]: time="2026-03-02T13:07:40.687646509Z" level=info msg="StartContainer for \"bea94e8021c1ff1bac5f651432d88a6b5c63678bb1066c3f480951e52e89d721\" returns successfully"
Mar 2 13:07:40.688419 systemd[1]: cri-containerd-bea94e8021c1ff1bac5f651432d88a6b5c63678bb1066c3f480951e52e89d721.scope: Deactivated successfully.
Mar 2 13:07:40.882493 containerd[1474]: time="2026-03-02T13:07:40.881994414Z" level=info msg="shim disconnected" id=bea94e8021c1ff1bac5f651432d88a6b5c63678bb1066c3f480951e52e89d721 namespace=k8s.io
Mar 2 13:07:40.882493 containerd[1474]: time="2026-03-02T13:07:40.882258470Z" level=warning msg="cleaning up after shim disconnected" id=bea94e8021c1ff1bac5f651432d88a6b5c63678bb1066c3f480951e52e89d721 namespace=k8s.io
Mar 2 13:07:40.882493 containerd[1474]: time="2026-03-02T13:07:40.882287272Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:07:41.261476 kubelet[2577]: E0302 13:07:41.260383 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:41.265336 kubelet[2577]: E0302 13:07:41.265154 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:41.472158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bea94e8021c1ff1bac5f651432d88a6b5c63678bb1066c3f480951e52e89d721-rootfs.mount: Deactivated successfully.
Mar 2 13:07:42.247996 kubelet[2577]: E0302 13:07:42.246754 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:42.270034 kubelet[2577]: E0302 13:07:42.269701 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:42.270034 kubelet[2577]: E0302 13:07:42.269733 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:42.273059 containerd[1474]: time="2026-03-02T13:07:42.272625616Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\""
Mar 2 13:07:49.895846 containerd[1474]: time="2026-03-02T13:07:49.887653650Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:49.895846 containerd[1474]: time="2026-03-02T13:07:49.894935246Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574"
Mar 2 13:07:49.983760 containerd[1474]: time="2026-03-02T13:07:49.954753562Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:49.994202 containerd[1474]: time="2026-03-02T13:07:49.993564752Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:49.998123 containerd[1474]: time="2026-03-02T13:07:49.998071353Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 7.725354352s"
Mar 2 13:07:49.998960 containerd[1474]: time="2026-03-02T13:07:49.998130510Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\""
Mar 2 13:07:50.020026 containerd[1474]: time="2026-03-02T13:07:50.019464911Z" level=info msg="CreateContainer within sandbox \"b9909d19ea2fecba296d0ae416e410146f063a43c58329c9f8a80d88e09e83d2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 2 13:07:50.095065 containerd[1474]: time="2026-03-02T13:07:50.094946350Z" level=info msg="CreateContainer within sandbox \"b9909d19ea2fecba296d0ae416e410146f063a43c58329c9f8a80d88e09e83d2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4c95bf78697eb24a33b55561c496f0c9309a5deb66f4afa25d61c03029d2be2a\""
Mar 2 13:07:50.099012 containerd[1474]: time="2026-03-02T13:07:50.098885730Z" level=info msg="StartContainer for \"4c95bf78697eb24a33b55561c496f0c9309a5deb66f4afa25d61c03029d2be2a\""
Mar 2 13:07:50.464294 systemd[1]: Started cri-containerd-4c95bf78697eb24a33b55561c496f0c9309a5deb66f4afa25d61c03029d2be2a.scope - libcontainer container 4c95bf78697eb24a33b55561c496f0c9309a5deb66f4afa25d61c03029d2be2a.
Mar 2 13:07:50.800938 systemd[1]: cri-containerd-4c95bf78697eb24a33b55561c496f0c9309a5deb66f4afa25d61c03029d2be2a.scope: Deactivated successfully.
Mar 2 13:07:50.886986 kubelet[2577]: I0302 13:07:50.886695 2577 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 2 13:07:50.936189 containerd[1474]: time="2026-03-02T13:07:50.935495362Z" level=info msg="StartContainer for \"4c95bf78697eb24a33b55561c496f0c9309a5deb66f4afa25d61c03029d2be2a\" returns successfully"
Mar 2 13:07:51.112750 systemd[1]: Created slice kubepods-burstable-pod02ad0b84_615f_4abc_a8ac_fbb083d34da8.slice - libcontainer container kubepods-burstable-pod02ad0b84_615f_4abc_a8ac_fbb083d34da8.slice.
Mar 2 13:07:51.147444 kubelet[2577]: I0302 13:07:51.147394 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02ad0b84-615f-4abc-a8ac-fbb083d34da8-config-volume\") pod \"coredns-66bc5c9577-4jhpl\" (UID: \"02ad0b84-615f-4abc-a8ac-fbb083d34da8\") " pod="kube-system/coredns-66bc5c9577-4jhpl"
Mar 2 13:07:51.149063 systemd[1]: Created slice kubepods-burstable-podd1285e73_efbe_430f_8ee6_798e24a09b23.slice - libcontainer container kubepods-burstable-podd1285e73_efbe_430f_8ee6_798e24a09b23.slice.
Mar 2 13:07:51.149949 kubelet[2577]: I0302 13:07:51.149164 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhcmd\" (UniqueName: \"kubernetes.io/projected/02ad0b84-615f-4abc-a8ac-fbb083d34da8-kube-api-access-qhcmd\") pod \"coredns-66bc5c9577-4jhpl\" (UID: \"02ad0b84-615f-4abc-a8ac-fbb083d34da8\") " pod="kube-system/coredns-66bc5c9577-4jhpl"
Mar 2 13:07:51.166652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c95bf78697eb24a33b55561c496f0c9309a5deb66f4afa25d61c03029d2be2a-rootfs.mount: Deactivated successfully.
Mar 2 13:07:51.250552 kubelet[2577]: I0302 13:07:51.250304 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mps4\" (UniqueName: \"kubernetes.io/projected/d1285e73-efbe-430f-8ee6-798e24a09b23-kube-api-access-4mps4\") pod \"coredns-66bc5c9577-p5h6r\" (UID: \"d1285e73-efbe-430f-8ee6-798e24a09b23\") " pod="kube-system/coredns-66bc5c9577-p5h6r"
Mar 2 13:07:51.250552 kubelet[2577]: I0302 13:07:51.250412 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1285e73-efbe-430f-8ee6-798e24a09b23-config-volume\") pod \"coredns-66bc5c9577-p5h6r\" (UID: \"d1285e73-efbe-430f-8ee6-798e24a09b23\") " pod="kube-system/coredns-66bc5c9577-p5h6r"
Mar 2 13:07:51.368371 containerd[1474]: time="2026-03-02T13:07:51.366919618Z" level=info msg="shim disconnected" id=4c95bf78697eb24a33b55561c496f0c9309a5deb66f4afa25d61c03029d2be2a namespace=k8s.io
Mar 2 13:07:51.368371 containerd[1474]: time="2026-03-02T13:07:51.367019831Z" level=warning msg="cleaning up after shim disconnected" id=4c95bf78697eb24a33b55561c496f0c9309a5deb66f4afa25d61c03029d2be2a namespace=k8s.io
Mar 2 13:07:51.368371 containerd[1474]: time="2026-03-02T13:07:51.367042713Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:07:51.464278 kubelet[2577]: E0302 13:07:51.464043 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:51.470729 containerd[1474]: time="2026-03-02T13:07:51.470516478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4jhpl,Uid:02ad0b84-615f-4abc-a8ac-fbb083d34da8,Namespace:kube-system,Attempt:0,}"
Mar 2 13:07:51.475717 containerd[1474]: time="2026-03-02T13:07:51.475635362Z" level=warning msg="cleanup warnings time=\"2026-03-02T13:07:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 2 13:07:51.476984 kubelet[2577]: E0302 13:07:51.476740 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:51.480271 containerd[1474]: time="2026-03-02T13:07:51.480071683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-p5h6r,Uid:d1285e73-efbe-430f-8ee6-798e24a09b23,Namespace:kube-system,Attempt:0,}"
Mar 2 13:07:51.847823 kubelet[2577]: E0302 13:07:51.847315 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:51.879625 containerd[1474]: time="2026-03-02T13:07:51.879365584Z" level=info msg="CreateContainer within sandbox \"b9909d19ea2fecba296d0ae416e410146f063a43c58329c9f8a80d88e09e83d2\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Mar 2 13:07:51.958883 containerd[1474]: time="2026-03-02T13:07:51.958685459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4jhpl,Uid:02ad0b84-615f-4abc-a8ac-fbb083d34da8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"74a84107f768e9f88a007f5eb2f4709352d3f08d528a4d743a8363fb7df16dbe\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Mar 2 13:07:51.959678 kubelet[2577]: E0302 13:07:51.959133 2577 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74a84107f768e9f88a007f5eb2f4709352d3f08d528a4d743a8363fb7df16dbe\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Mar 2 13:07:51.959678 kubelet[2577]: E0302 13:07:51.959297 2577 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74a84107f768e9f88a007f5eb2f4709352d3f08d528a4d743a8363fb7df16dbe\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-4jhpl"
Mar 2 13:07:51.959678 kubelet[2577]: E0302 13:07:51.959326 2577 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74a84107f768e9f88a007f5eb2f4709352d3f08d528a4d743a8363fb7df16dbe\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-4jhpl"
Mar 2 13:07:51.959678 kubelet[2577]: E0302 13:07:51.959393 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-4jhpl_kube-system(02ad0b84-615f-4abc-a8ac-fbb083d34da8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-4jhpl_kube-system(02ad0b84-615f-4abc-a8ac-fbb083d34da8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74a84107f768e9f88a007f5eb2f4709352d3f08d528a4d743a8363fb7df16dbe\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-4jhpl" podUID="02ad0b84-615f-4abc-a8ac-fbb083d34da8"
Mar 2 13:07:51.991111 containerd[1474]: time="2026-03-02T13:07:51.988149867Z" level=info msg="CreateContainer within sandbox \"b9909d19ea2fecba296d0ae416e410146f063a43c58329c9f8a80d88e09e83d2\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"1adb68ba3a673b4d92a77944c17cd23e0a9ecb483c61fc028c03d9140a081e0a\""
Mar 2 13:07:51.991111 containerd[1474]: time="2026-03-02T13:07:51.989374299Z" level=info msg="StartContainer for \"1adb68ba3a673b4d92a77944c17cd23e0a9ecb483c61fc028c03d9140a081e0a\""
Mar 2 13:07:51.996328 containerd[1474]: time="2026-03-02T13:07:51.996091704Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-p5h6r,Uid:d1285e73-efbe-430f-8ee6-798e24a09b23,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf2efd1b821627c5d6678d11f9e33ee5a927f3218b99e2dac7980f9fe6cdfb1e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Mar 2 13:07:51.998108 kubelet[2577]: E0302 13:07:51.997977 2577 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf2efd1b821627c5d6678d11f9e33ee5a927f3218b99e2dac7980f9fe6cdfb1e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Mar 2 13:07:51.998108 kubelet[2577]: E0302 13:07:51.998088 2577 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf2efd1b821627c5d6678d11f9e33ee5a927f3218b99e2dac7980f9fe6cdfb1e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-p5h6r"
Mar 2 13:07:51.998321 kubelet[2577]: E0302 13:07:51.998120 2577 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf2efd1b821627c5d6678d11f9e33ee5a927f3218b99e2dac7980f9fe6cdfb1e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-p5h6r"
Mar 2 13:07:51.998321 kubelet[2577]: E0302 13:07:51.998236 2577 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-p5h6r_kube-system(d1285e73-efbe-430f-8ee6-798e24a09b23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-p5h6r_kube-system(d1285e73-efbe-430f-8ee6-798e24a09b23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf2efd1b821627c5d6678d11f9e33ee5a927f3218b99e2dac7980f9fe6cdfb1e\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-p5h6r" podUID="d1285e73-efbe-430f-8ee6-798e24a09b23"
Mar 2 13:07:52.190217 systemd[1]: run-netns-cni\x2df110d494\x2dd734\x2d410d\x2d4372\x2dd7ecfdfca88b.mount: Deactivated successfully.
Mar 2 13:07:52.211615 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-74a84107f768e9f88a007f5eb2f4709352d3f08d528a4d743a8363fb7df16dbe-shm.mount: Deactivated successfully.
Mar 2 13:07:52.297389 systemd[1]: Started cri-containerd-1adb68ba3a673b4d92a77944c17cd23e0a9ecb483c61fc028c03d9140a081e0a.scope - libcontainer container 1adb68ba3a673b4d92a77944c17cd23e0a9ecb483c61fc028c03d9140a081e0a.
Mar 2 13:07:52.389903 containerd[1474]: time="2026-03-02T13:07:52.389598354Z" level=info msg="StartContainer for \"1adb68ba3a673b4d92a77944c17cd23e0a9ecb483c61fc028c03d9140a081e0a\" returns successfully"
Mar 2 13:07:52.875047 kubelet[2577]: E0302 13:07:52.872754 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:52.907646 kubelet[2577]: I0302 13:07:52.906659 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-4tmrf" podStartSLOduration=4.24614423 podStartE2EDuration="14.906639959s" podCreationTimestamp="2026-03-02 13:07:38 +0000 UTC" firstStartedPulling="2026-03-02 13:07:39.342178736 +0000 UTC m=+5.940709955" lastFinishedPulling="2026-03-02 13:07:50.002674436 +0000 UTC m=+16.601205684" observedRunningTime="2026-03-02 13:07:52.906580778 +0000 UTC m=+19.505112028" watchObservedRunningTime="2026-03-02 13:07:52.906639959 +0000 UTC m=+19.505171179"
Mar 2 13:07:53.833589 systemd-networkd[1401]: flannel.1: Link UP
Mar 2 13:07:53.833607 systemd-networkd[1401]: flannel.1: Gained carrier
Mar 2 13:07:53.973456 kubelet[2577]: E0302 13:07:53.969674 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:55.546990 systemd-networkd[1401]: flannel.1: Gained IPv6LL
Mar 2 13:08:04.858901 kubelet[2577]: E0302 13:08:04.858682 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:04.862663 containerd[1474]: time="2026-03-02T13:08:04.862409185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-p5h6r,Uid:d1285e73-efbe-430f-8ee6-798e24a09b23,Namespace:kube-system,Attempt:0,}"
Mar 2 13:08:04.997209 systemd-networkd[1401]: cni0: Link UP
Mar 2 13:08:04.997225 systemd-networkd[1401]: cni0: Gained carrier
Mar 2 13:08:05.002700 systemd-networkd[1401]: cni0: Lost carrier
Mar 2 13:08:05.070187 systemd-networkd[1401]: veth6957338d: Link UP
Mar 2 13:08:05.082951 kernel: cni0: port 1(veth6957338d) entered blocking state
Mar 2 13:08:05.083983 kernel: cni0: port 1(veth6957338d) entered disabled state
Mar 2 13:08:05.084073 kernel: veth6957338d: entered allmulticast mode
Mar 2 13:08:05.093973 kernel: veth6957338d: entered promiscuous mode
Mar 2 13:08:05.108013 kernel: cni0: port 1(veth6957338d) entered blocking state
Mar 2 13:08:05.108101 kernel: cni0: port 1(veth6957338d) entered forwarding state
Mar 2 13:08:05.114363 kernel: cni0: port 1(veth6957338d) entered disabled state
Mar 2 13:08:05.149107 kernel: cni0: port 1(veth6957338d) entered blocking state
Mar 2 13:08:05.149259 kernel: cni0: port 1(veth6957338d) entered forwarding state
Mar 2 13:08:05.149427 systemd-networkd[1401]: veth6957338d: Gained carrier
Mar 2 13:08:05.150457 systemd-networkd[1401]: cni0: Gained carrier
Mar 2 13:08:05.169898 containerd[1474]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0001807e0), "name":"cbr0", "type":"bridge"}
Mar 2 13:08:05.169898 containerd[1474]: delegateAdd: netconf sent to delegate plugin:
Mar 2 13:08:05.284522 containerd[1474]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Mar 2 13:08:05.284522 containerd[1474]: time="2026-03-02T13:08:05.281433759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:08:05.284522 containerd[1474]: time="2026-03-02T13:08:05.281669432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:08:05.284522 containerd[1474]: time="2026-03-02T13:08:05.281694949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:08:05.284522 containerd[1474]: time="2026-03-02T13:08:05.282023402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:08:05.491532 systemd[1]: Started cri-containerd-87cdcbd47070416ed16b843710397dcbc412cc076f7b698c504683f36dff3cc1.scope - libcontainer container 87cdcbd47070416ed16b843710397dcbc412cc076f7b698c504683f36dff3cc1.
Mar 2 13:08:05.646169 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:08:05.867051 containerd[1474]: time="2026-03-02T13:08:05.866986940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-p5h6r,Uid:d1285e73-efbe-430f-8ee6-798e24a09b23,Namespace:kube-system,Attempt:0,} returns sandbox id \"87cdcbd47070416ed16b843710397dcbc412cc076f7b698c504683f36dff3cc1\"" Mar 2 13:08:05.871108 kubelet[2577]: E0302 13:08:05.871070 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:05.891551 containerd[1474]: time="2026-03-02T13:08:05.888660670Z" level=info msg="CreateContainer within sandbox \"87cdcbd47070416ed16b843710397dcbc412cc076f7b698c504683f36dff3cc1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 13:08:06.069445 containerd[1474]: time="2026-03-02T13:08:06.069292188Z" level=info msg="CreateContainer within sandbox \"87cdcbd47070416ed16b843710397dcbc412cc076f7b698c504683f36dff3cc1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cdb930e8895c1487bb488d0d749984ca74509c43b8142561cc88f1102e2ae83d\"" Mar 2 13:08:06.071047 containerd[1474]: time="2026-03-02T13:08:06.070745105Z" level=info msg="StartContainer for \"cdb930e8895c1487bb488d0d749984ca74509c43b8142561cc88f1102e2ae83d\"" Mar 2 13:08:06.473090 systemd-networkd[1401]: veth6957338d: Gained IPv6LL Mar 2 13:08:06.606341 systemd[1]: Started cri-containerd-cdb930e8895c1487bb488d0d749984ca74509c43b8142561cc88f1102e2ae83d.scope - libcontainer container cdb930e8895c1487bb488d0d749984ca74509c43b8142561cc88f1102e2ae83d. 
Mar 2 13:08:06.850248 kubelet[2577]: E0302 13:08:06.849950 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:06.853975 containerd[1474]: time="2026-03-02T13:08:06.851605258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4jhpl,Uid:02ad0b84-615f-4abc-a8ac-fbb083d34da8,Namespace:kube-system,Attempt:0,}" Mar 2 13:08:06.853975 containerd[1474]: time="2026-03-02T13:08:06.852202787Z" level=info msg="StartContainer for \"cdb930e8895c1487bb488d0d749984ca74509c43b8142561cc88f1102e2ae83d\" returns successfully" Mar 2 13:08:06.959225 systemd-networkd[1401]: veth09f0f976: Link UP Mar 2 13:08:06.972270 kernel: cni0: port 2(veth09f0f976) entered blocking state Mar 2 13:08:06.976304 kernel: cni0: port 2(veth09f0f976) entered disabled state Mar 2 13:08:06.976362 kernel: veth09f0f976: entered allmulticast mode Mar 2 13:08:06.988028 kernel: veth09f0f976: entered promiscuous mode Mar 2 13:08:07.049680 systemd-networkd[1401]: cni0: Gained IPv6LL Mar 2 13:08:07.072937 kernel: cni0: port 2(veth09f0f976) entered blocking state Mar 2 13:08:07.073149 kernel: cni0: port 2(veth09f0f976) entered forwarding state Mar 2 13:08:07.073308 systemd-networkd[1401]: veth09f0f976: Gained carrier Mar 2 13:08:07.076753 containerd[1474]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000a2950), "name":"cbr0", "type":"bridge"} Mar 2 13:08:07.076753 containerd[1474]: 
delegateAdd: netconf sent to delegate plugin: Mar 2 13:08:07.141996 kubelet[2577]: E0302 13:08:07.140244 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:07.294711 containerd[1474]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-03-02T13:08:07.292573927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:08:07.294711 containerd[1474]: time="2026-03-02T13:08:07.292886471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:08:07.294711 containerd[1474]: time="2026-03-02T13:08:07.292912128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:08:07.294711 containerd[1474]: time="2026-03-02T13:08:07.293071320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:08:07.391338 kubelet[2577]: I0302 13:08:07.391191 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-p5h6r" podStartSLOduration=29.391160942 podStartE2EDuration="29.391160942s" podCreationTimestamp="2026-03-02 13:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:08:07.287644944 +0000 UTC m=+33.886176173" watchObservedRunningTime="2026-03-02 13:08:07.391160942 +0000 UTC m=+33.989692161" Mar 2 13:08:07.562663 systemd[1]: Started cri-containerd-86b08176169af1c1754b2a3aea591cc5fd51155ab039de72feb84d2ee9ad666d.scope - libcontainer container 86b08176169af1c1754b2a3aea591cc5fd51155ab039de72feb84d2ee9ad666d. Mar 2 13:08:07.698898 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:08:07.773710 containerd[1474]: time="2026-03-02T13:08:07.773570278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4jhpl,Uid:02ad0b84-615f-4abc-a8ac-fbb083d34da8,Namespace:kube-system,Attempt:0,} returns sandbox id \"86b08176169af1c1754b2a3aea591cc5fd51155ab039de72feb84d2ee9ad666d\"" Mar 2 13:08:07.778446 kubelet[2577]: E0302 13:08:07.778123 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:07.840207 containerd[1474]: time="2026-03-02T13:08:07.839207954Z" level=info msg="CreateContainer within sandbox \"86b08176169af1c1754b2a3aea591cc5fd51155ab039de72feb84d2ee9ad666d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 13:08:07.883087 containerd[1474]: time="2026-03-02T13:08:07.882753669Z" level=info msg="CreateContainer within sandbox \"86b08176169af1c1754b2a3aea591cc5fd51155ab039de72feb84d2ee9ad666d\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e2fc5d987b50a1f74fe432ad3d68ce4a9adea347bc45d91464272c279b3fcf60\"" Mar 2 13:08:07.886436 containerd[1474]: time="2026-03-02T13:08:07.886237538Z" level=info msg="StartContainer for \"e2fc5d987b50a1f74fe432ad3d68ce4a9adea347bc45d91464272c279b3fcf60\"" Mar 2 13:08:08.167752 systemd[1]: Started cri-containerd-e2fc5d987b50a1f74fe432ad3d68ce4a9adea347bc45d91464272c279b3fcf60.scope - libcontainer container e2fc5d987b50a1f74fe432ad3d68ce4a9adea347bc45d91464272c279b3fcf60. Mar 2 13:08:08.185119 kubelet[2577]: E0302 13:08:08.184477 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:08.319103 containerd[1474]: time="2026-03-02T13:08:08.318690259Z" level=info msg="StartContainer for \"e2fc5d987b50a1f74fe432ad3d68ce4a9adea347bc45d91464272c279b3fcf60\" returns successfully" Mar 2 13:08:08.775364 systemd-networkd[1401]: veth09f0f976: Gained IPv6LL Mar 2 13:08:09.247483 kubelet[2577]: E0302 13:08:09.247263 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:09.247483 kubelet[2577]: E0302 13:08:09.247320 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:09.313134 kubelet[2577]: I0302 13:08:09.312698 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4jhpl" podStartSLOduration=31.312391264 podStartE2EDuration="31.312391264s" podCreationTimestamp="2026-03-02 13:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:08:09.303062237 +0000 UTC 
m=+35.901593487" watchObservedRunningTime="2026-03-02 13:08:09.312391264 +0000 UTC m=+35.910922502" Mar 2 13:08:10.251000 kubelet[2577]: E0302 13:08:10.250727 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:11.262129 kubelet[2577]: E0302 13:08:11.261723 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:38.274659 systemd[1]: Started sshd@5-10.0.0.53:22-10.0.0.1:54208.service - OpenSSH per-connection server daemon (10.0.0.1:54208). Mar 2 13:08:38.406019 sshd[3641]: Accepted publickey for core from 10.0.0.1 port 54208 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:08:38.412963 sshd[3641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:08:38.436244 systemd-logind[1457]: New session 6 of user core. Mar 2 13:08:38.455418 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 2 13:08:38.714058 sshd[3641]: pam_unix(sshd:session): session closed for user core Mar 2 13:08:38.728703 systemd[1]: sshd@5-10.0.0.53:22-10.0.0.1:54208.service: Deactivated successfully. Mar 2 13:08:38.740622 systemd[1]: session-6.scope: Deactivated successfully. Mar 2 13:08:38.743959 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit. Mar 2 13:08:38.746999 systemd-logind[1457]: Removed session 6. 
Mar 2 13:08:40.820673 kubelet[2577]: E0302 13:08:40.820294 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:41.822057 kubelet[2577]: E0302 13:08:41.821944 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:43.751138 systemd[1]: Started sshd@6-10.0.0.53:22-10.0.0.1:39904.service - OpenSSH per-connection server daemon (10.0.0.1:39904). Mar 2 13:08:43.799081 sshd[3685]: Accepted publickey for core from 10.0.0.1 port 39904 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:08:43.802091 sshd[3685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:08:43.813432 systemd-logind[1457]: New session 7 of user core. Mar 2 13:08:43.825179 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 2 13:08:44.020543 sshd[3685]: pam_unix(sshd:session): session closed for user core Mar 2 13:08:44.033544 systemd[1]: sshd@6-10.0.0.53:22-10.0.0.1:39904.service: Deactivated successfully. Mar 2 13:08:44.037389 systemd[1]: session-7.scope: Deactivated successfully. Mar 2 13:08:44.040074 systemd-logind[1457]: Session 7 logged out. Waiting for processes to exit. Mar 2 13:08:44.043532 systemd-logind[1457]: Removed session 7. Mar 2 13:08:46.821758 kubelet[2577]: E0302 13:08:46.820880 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:49.033970 systemd[1]: Started sshd@7-10.0.0.53:22-10.0.0.1:38598.service - OpenSSH per-connection server daemon (10.0.0.1:38598). 
Mar 2 13:08:49.086342 sshd[3720]: Accepted publickey for core from 10.0.0.1 port 38598 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:08:49.089281 sshd[3720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:08:49.097945 systemd-logind[1457]: New session 8 of user core. Mar 2 13:08:49.115285 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 2 13:08:49.286292 sshd[3720]: pam_unix(sshd:session): session closed for user core Mar 2 13:08:49.297225 systemd[1]: sshd@7-10.0.0.53:22-10.0.0.1:38598.service: Deactivated successfully. Mar 2 13:08:49.300674 systemd[1]: session-8.scope: Deactivated successfully. Mar 2 13:08:49.307322 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit. Mar 2 13:08:49.313425 systemd[1]: Started sshd@8-10.0.0.53:22-10.0.0.1:38612.service - OpenSSH per-connection server daemon (10.0.0.1:38612). Mar 2 13:08:49.317960 systemd-logind[1457]: Removed session 8. Mar 2 13:08:49.361348 sshd[3735]: Accepted publickey for core from 10.0.0.1 port 38612 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:08:49.364873 sshd[3735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:08:49.372983 systemd-logind[1457]: New session 9 of user core. Mar 2 13:08:49.385108 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 2 13:08:49.630674 sshd[3735]: pam_unix(sshd:session): session closed for user core Mar 2 13:08:49.640009 systemd[1]: sshd@8-10.0.0.53:22-10.0.0.1:38612.service: Deactivated successfully. Mar 2 13:08:49.643346 systemd[1]: session-9.scope: Deactivated successfully. Mar 2 13:08:49.648959 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit. Mar 2 13:08:49.663609 systemd[1]: Started sshd@9-10.0.0.53:22-10.0.0.1:38616.service - OpenSSH per-connection server daemon (10.0.0.1:38616). Mar 2 13:08:49.666430 systemd-logind[1457]: Removed session 9. 
Mar 2 13:08:49.707406 sshd[3753]: Accepted publickey for core from 10.0.0.1 port 38616 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:08:49.710288 sshd[3753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:08:49.719723 systemd-logind[1457]: New session 10 of user core. Mar 2 13:08:49.731157 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 2 13:08:49.936616 sshd[3753]: pam_unix(sshd:session): session closed for user core Mar 2 13:08:49.942697 systemd[1]: sshd@9-10.0.0.53:22-10.0.0.1:38616.service: Deactivated successfully. Mar 2 13:08:49.946032 systemd[1]: session-10.scope: Deactivated successfully. Mar 2 13:08:49.947695 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit. Mar 2 13:08:49.950062 systemd-logind[1457]: Removed session 10. Mar 2 13:08:54.956155 systemd[1]: Started sshd@10-10.0.0.53:22-10.0.0.1:38626.service - OpenSSH per-connection server daemon (10.0.0.1:38626). Mar 2 13:08:55.016573 sshd[3788]: Accepted publickey for core from 10.0.0.1 port 38626 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:08:55.019592 sshd[3788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:08:55.029498 systemd-logind[1457]: New session 11 of user core. Mar 2 13:08:55.039135 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 2 13:08:55.231571 sshd[3788]: pam_unix(sshd:session): session closed for user core Mar 2 13:08:55.238685 systemd[1]: sshd@10-10.0.0.53:22-10.0.0.1:38626.service: Deactivated successfully. Mar 2 13:08:55.242403 systemd[1]: session-11.scope: Deactivated successfully. Mar 2 13:08:55.244378 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit. Mar 2 13:08:55.246358 systemd-logind[1457]: Removed session 11. 
Mar 2 13:08:56.847970 kubelet[2577]: E0302 13:08:56.847670 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:00.296231 systemd[1]: Started sshd@11-10.0.0.53:22-10.0.0.1:40694.service - OpenSSH per-connection server daemon (10.0.0.1:40694). Mar 2 13:09:00.366735 sshd[3822]: Accepted publickey for core from 10.0.0.1 port 40694 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:09:00.369928 sshd[3822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:09:00.379252 systemd-logind[1457]: New session 12 of user core. Mar 2 13:09:00.386196 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 2 13:09:00.610877 sshd[3822]: pam_unix(sshd:session): session closed for user core Mar 2 13:09:00.641006 systemd[1]: sshd@11-10.0.0.53:22-10.0.0.1:40694.service: Deactivated successfully. Mar 2 13:09:00.649570 systemd[1]: session-12.scope: Deactivated successfully. Mar 2 13:09:00.654073 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit. Mar 2 13:09:00.662585 systemd[1]: Started sshd@12-10.0.0.53:22-10.0.0.1:40696.service - OpenSSH per-connection server daemon (10.0.0.1:40696). Mar 2 13:09:00.666133 systemd-logind[1457]: Removed session 12. Mar 2 13:09:00.741652 sshd[3851]: Accepted publickey for core from 10.0.0.1 port 40696 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:09:00.747024 sshd[3851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:09:00.762667 systemd-logind[1457]: New session 13 of user core. Mar 2 13:09:00.770579 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 2 13:09:01.473878 sshd[3851]: pam_unix(sshd:session): session closed for user core Mar 2 13:09:01.491923 systemd[1]: sshd@12-10.0.0.53:22-10.0.0.1:40696.service: Deactivated successfully. 
Mar 2 13:09:01.495212 systemd[1]: session-13.scope: Deactivated successfully. Mar 2 13:09:01.500344 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit. Mar 2 13:09:01.513714 systemd[1]: Started sshd@13-10.0.0.53:22-10.0.0.1:40700.service - OpenSSH per-connection server daemon (10.0.0.1:40700). Mar 2 13:09:01.516587 systemd-logind[1457]: Removed session 13. Mar 2 13:09:01.600581 sshd[3865]: Accepted publickey for core from 10.0.0.1 port 40700 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:09:01.607877 sshd[3865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:09:01.642870 systemd-logind[1457]: New session 14 of user core. Mar 2 13:09:01.650474 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 2 13:09:03.280866 sshd[3865]: pam_unix(sshd:session): session closed for user core Mar 2 13:09:03.315654 systemd[1]: Started sshd@14-10.0.0.53:22-10.0.0.1:40708.service - OpenSSH per-connection server daemon (10.0.0.1:40708). Mar 2 13:09:03.316646 systemd[1]: sshd@13-10.0.0.53:22-10.0.0.1:40700.service: Deactivated successfully. Mar 2 13:09:03.368968 systemd[1]: session-14.scope: Deactivated successfully. Mar 2 13:09:03.370250 systemd[1]: session-14.scope: Consumed 1.365s CPU time. Mar 2 13:09:03.374601 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit. Mar 2 13:09:03.380390 systemd-logind[1457]: Removed session 14. Mar 2 13:09:03.897110 sshd[3880]: Accepted publickey for core from 10.0.0.1 port 40708 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:09:03.904122 sshd[3880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:09:03.915531 systemd-logind[1457]: New session 15 of user core. Mar 2 13:09:03.957725 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 2 13:09:06.561539 sshd[3880]: pam_unix(sshd:session): session closed for user core Mar 2 13:09:06.615500 systemd[1]: sshd@14-10.0.0.53:22-10.0.0.1:40708.service: Deactivated successfully. Mar 2 13:09:06.657678 systemd[1]: session-15.scope: Deactivated successfully. Mar 2 13:09:06.659523 systemd[1]: session-15.scope: Consumed 2.362s CPU time. Mar 2 13:09:06.663944 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit. Mar 2 13:09:06.687174 systemd[1]: Started sshd@15-10.0.0.53:22-10.0.0.1:40720.service - OpenSSH per-connection server daemon (10.0.0.1:40720). Mar 2 13:09:06.701221 systemd-logind[1457]: Removed session 15. Mar 2 13:09:07.117178 sshd[3916]: Accepted publickey for core from 10.0.0.1 port 40720 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:09:07.187508 sshd[3916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:09:07.358399 systemd-logind[1457]: New session 16 of user core. Mar 2 13:09:07.393920 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 2 13:09:08.396128 sshd[3916]: pam_unix(sshd:session): session closed for user core Mar 2 13:09:08.600360 systemd[1]: sshd@15-10.0.0.53:22-10.0.0.1:40720.service: Deactivated successfully. Mar 2 13:09:08.677971 systemd[1]: session-16.scope: Deactivated successfully. Mar 2 13:09:08.678900 systemd[1]: session-16.scope: Consumed 1.082s CPU time. Mar 2 13:09:08.712878 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit. Mar 2 13:09:08.778315 systemd-logind[1457]: Removed session 16. Mar 2 13:09:11.902155 kubelet[2577]: E0302 13:09:11.901159 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:14.418595 systemd[1]: Started sshd@16-10.0.0.53:22-10.0.0.1:46162.service - OpenSSH per-connection server daemon (10.0.0.1:46162). 
Mar 2 13:09:15.210313 sshd[3953]: Accepted publickey for core from 10.0.0.1 port 46162 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:09:15.214226 sshd[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:09:15.254047 systemd-logind[1457]: New session 17 of user core. Mar 2 13:09:15.268356 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 2 13:09:15.720280 sshd[3953]: pam_unix(sshd:session): session closed for user core Mar 2 13:09:15.738539 systemd[1]: sshd@16-10.0.0.53:22-10.0.0.1:46162.service: Deactivated successfully. Mar 2 13:09:15.742224 systemd[1]: session-17.scope: Deactivated successfully. Mar 2 13:09:15.749296 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit. Mar 2 13:09:15.758369 systemd-logind[1457]: Removed session 17. Mar 2 13:09:16.821474 kubelet[2577]: E0302 13:09:16.820661 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:20.765884 systemd[1]: Started sshd@17-10.0.0.53:22-10.0.0.1:33916.service - OpenSSH per-connection server daemon (10.0.0.1:33916). Mar 2 13:09:20.852151 sshd[3994]: Accepted publickey for core from 10.0.0.1 port 33916 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:09:20.853547 sshd[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:09:20.879031 systemd-logind[1457]: New session 18 of user core. Mar 2 13:09:20.887390 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 2 13:09:21.266567 sshd[3994]: pam_unix(sshd:session): session closed for user core Mar 2 13:09:21.273876 systemd[1]: sshd@17-10.0.0.53:22-10.0.0.1:33916.service: Deactivated successfully. Mar 2 13:09:21.279910 systemd[1]: session-18.scope: Deactivated successfully. Mar 2 13:09:21.283141 systemd-logind[1457]: Session 18 logged out. 
Waiting for processes to exit. Mar 2 13:09:21.287196 systemd-logind[1457]: Removed session 18. Mar 2 13:09:26.317593 systemd[1]: Started sshd@18-10.0.0.53:22-10.0.0.1:33924.service - OpenSSH per-connection server daemon (10.0.0.1:33924). Mar 2 13:09:26.391862 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 33924 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:09:26.396057 sshd[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:09:26.410870 systemd-logind[1457]: New session 19 of user core. Mar 2 13:09:26.433552 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 2 13:09:26.705461 sshd[4028]: pam_unix(sshd:session): session closed for user core Mar 2 13:09:26.714140 systemd[1]: sshd@18-10.0.0.53:22-10.0.0.1:33924.service: Deactivated successfully. Mar 2 13:09:26.738141 systemd[1]: session-19.scope: Deactivated successfully. Mar 2 13:09:26.745298 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit. Mar 2 13:09:26.750531 systemd-logind[1457]: Removed session 19. Mar 2 13:09:31.737320 systemd[1]: Started sshd@19-10.0.0.53:22-10.0.0.1:42238.service - OpenSSH per-connection server daemon (10.0.0.1:42238). Mar 2 13:09:31.846895 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 42238 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:09:31.853750 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:09:31.869890 systemd-logind[1457]: New session 20 of user core. Mar 2 13:09:31.890468 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 2 13:09:32.180151 sshd[4076]: pam_unix(sshd:session): session closed for user core Mar 2 13:09:32.193069 systemd[1]: sshd@19-10.0.0.53:22-10.0.0.1:42238.service: Deactivated successfully. Mar 2 13:09:32.198152 systemd[1]: session-20.scope: Deactivated successfully. Mar 2 13:09:32.203187 systemd-logind[1457]: Session 20 logged out. 
Waiting for processes to exit. Mar 2 13:09:32.206270 systemd-logind[1457]: Removed session 20. Mar 2 13:09:32.825538 kubelet[2577]: E0302 13:09:32.824030 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:09:37.576325 systemd[1]: Started sshd@20-10.0.0.53:22-10.0.0.1:42246.service - OpenSSH per-connection server daemon (10.0.0.1:42246). Mar 2 13:09:38.150953 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 42246 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:09:38.183678 sshd[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:09:38.341875 systemd-logind[1457]: New session 21 of user core. Mar 2 13:09:38.391643 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 2 13:09:39.079002 sshd[4114]: pam_unix(sshd:session): session closed for user core Mar 2 13:09:39.092558 systemd[1]: sshd@20-10.0.0.53:22-10.0.0.1:42246.service: Deactivated successfully. Mar 2 13:09:39.096612 systemd[1]: session-21.scope: Deactivated successfully. Mar 2 13:09:39.100299 systemd-logind[1457]: Session 21 logged out. Waiting for processes to exit. Mar 2 13:09:39.116571 systemd-logind[1457]: Removed session 21. Mar 2 13:09:44.135596 systemd[1]: Started sshd@21-10.0.0.53:22-10.0.0.1:58698.service - OpenSSH per-connection server daemon (10.0.0.1:58698). Mar 2 13:09:44.186712 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 58698 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:09:44.190666 sshd[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:09:44.206877 systemd-logind[1457]: New session 22 of user core. Mar 2 13:09:44.218473 systemd[1]: Started session-22.scope - Session 22 of User core. 
Mar 2 13:09:44.582182 sshd[4151]: pam_unix(sshd:session): session closed for user core Mar 2 13:09:44.593392 systemd[1]: sshd@21-10.0.0.53:22-10.0.0.1:58698.service: Deactivated successfully. Mar 2 13:09:44.597279 systemd[1]: session-22.scope: Deactivated successfully. Mar 2 13:09:44.600955 systemd-logind[1457]: Session 22 logged out. Waiting for processes to exit. Mar 2 13:09:44.603869 systemd-logind[1457]: Removed session 22.