Mar 14 00:12:07.182613 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026
Mar 14 00:12:07.182653 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:12:07.182674 kernel: BIOS-provided physical RAM map:
Mar 14 00:12:07.182685 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 14 00:12:07.182694 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 14 00:12:07.182704 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 14 00:12:07.182715 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 14 00:12:07.182725 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 14 00:12:07.182734 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 14 00:12:07.182749 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 14 00:12:07.182759 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 14 00:12:07.182769 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 14 00:12:07.182822 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 14 00:12:07.182834 kernel: NX (Execute Disable) protection: active
Mar 14 00:12:07.182846 kernel: APIC: Static calls initialized
Mar 14 00:12:07.182896 kernel: SMBIOS 2.8 present.
Mar 14 00:12:07.182907 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 14 00:12:07.182990 kernel: Hypervisor detected: KVM
Mar 14 00:12:07.183001 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 14 00:12:07.183012 kernel: kvm-clock: using sched offset of 10906355911 cycles
Mar 14 00:12:07.183023 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 14 00:12:07.183034 kernel: tsc: Detected 2445.426 MHz processor
Mar 14 00:12:07.183044 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 14 00:12:07.183056 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 14 00:12:07.183071 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 14 00:12:07.183082 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 14 00:12:07.183093 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 14 00:12:07.183104 kernel: Using GB pages for direct mapping
Mar 14 00:12:07.183114 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:12:07.183125 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 14 00:12:07.183136 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:07.183147 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:07.183157 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:07.183172 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 14 00:12:07.183182 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:07.183193 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:07.183204 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:07.183214 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:07.183225 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 14 00:12:07.183236 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 14 00:12:07.183252 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 14 00:12:07.183267 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 14 00:12:07.183279 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 14 00:12:07.183290 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 14 00:12:07.183301 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 14 00:12:07.183312 kernel: No NUMA configuration found
Mar 14 00:12:07.183323 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 14 00:12:07.183338 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 14 00:12:07.183350 kernel: Zone ranges:
Mar 14 00:12:07.183361 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Mar 14 00:12:07.183372 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 14 00:12:07.183383 kernel:   Normal   empty
Mar 14 00:12:07.183395 kernel: Movable zone start for each node
Mar 14 00:12:07.183406 kernel: Early memory node ranges
Mar 14 00:12:07.183417 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Mar 14 00:12:07.183428 kernel:   node   0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 14 00:12:07.183439 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 14 00:12:07.183454 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 14 00:12:07.183553 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 14 00:12:07.183568 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 14 00:12:07.183579 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 14 00:12:07.183590 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 14 00:12:07.183601 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 14 00:12:07.183612 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 14 00:12:07.183624 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 14 00:12:07.183635 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 14 00:12:07.183654 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 14 00:12:07.183666 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 14 00:12:07.183679 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 14 00:12:07.183690 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 14 00:12:07.183701 kernel: TSC deadline timer available
Mar 14 00:12:07.183712 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 14 00:12:07.183723 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 14 00:12:07.183734 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 14 00:12:07.183784 kernel: kvm-guest: setup PV sched yield
Mar 14 00:12:07.183802 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 14 00:12:07.183813 kernel: Booting paravirtualized kernel on KVM
Mar 14 00:12:07.183825 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 14 00:12:07.183836 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 14 00:12:07.183848 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 14 00:12:07.183859 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 14 00:12:07.183870 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 14 00:12:07.183881 kernel: kvm-guest: PV spinlocks enabled
Mar 14 00:12:07.183892 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 14 00:12:07.183909 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:12:07.183987 kernel: random: crng init done
Mar 14 00:12:07.183999 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 14 00:12:07.184010 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:12:07.184022 kernel: Fallback order for Node 0: 0
Mar 14 00:12:07.184034 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 14 00:12:07.184045 kernel: Policy zone: DMA32
Mar 14 00:12:07.184056 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:12:07.184073 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved)
Mar 14 00:12:07.184084 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 14 00:12:07.184095 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 14 00:12:07.184106 kernel: ftrace: allocated 149 pages with 4 groups
Mar 14 00:12:07.184118 kernel: Dynamic Preempt: voluntary
Mar 14 00:12:07.185210 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:12:07.185421 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:12:07.185434 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 14 00:12:07.185445 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:12:07.185559 kernel: Rude variant of Tasks RCU enabled.
Mar 14 00:12:07.185571 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:12:07.185583 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:12:07.185594 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 14 00:12:07.185643 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 14 00:12:07.185657 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:12:07.185669 kernel: Console: colour VGA+ 80x25
Mar 14 00:12:07.185682 kernel: printk: console [ttyS0] enabled
Mar 14 00:12:07.185694 kernel: ACPI: Core revision 20230628
Mar 14 00:12:07.185711 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 14 00:12:07.185722 kernel: APIC: Switch to symmetric I/O mode setup
Mar 14 00:12:07.185734 kernel: x2apic enabled
Mar 14 00:12:07.185745 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 14 00:12:07.185756 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 14 00:12:07.185768 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 14 00:12:07.185779 kernel: kvm-guest: setup PV IPIs
Mar 14 00:12:07.185791 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 14 00:12:07.185858 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 14 00:12:07.185871 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 14 00:12:07.185883 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 14 00:12:07.185895 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 14 00:12:07.185910 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 14 00:12:07.185991 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 14 00:12:07.186003 kernel: Spectre V2 : Mitigation: Retpolines
Mar 14 00:12:07.186015 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 14 00:12:07.186027 kernel: Speculative Store Bypass: Vulnerable
Mar 14 00:12:07.186044 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 14 00:12:07.186091 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 14 00:12:07.186105 kernel: active return thunk: srso_alias_return_thunk
Mar 14 00:12:07.186117 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 14 00:12:07.186129 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 14 00:12:07.186141 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 14 00:12:07.186153 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 14 00:12:07.186165 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 14 00:12:07.186182 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 14 00:12:07.187155 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 14 00:12:07.188087 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 14 00:12:07.188110 kernel: Freeing SMP alternatives memory: 32K
Mar 14 00:12:07.188123 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:12:07.188135 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:12:07.188147 kernel: landlock: Up and running.
Mar 14 00:12:07.188159 kernel: SELinux: Initializing.
Mar 14 00:12:07.188171 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:12:07.188288 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:12:07.188302 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 14 00:12:07.188315 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 14 00:12:07.188327 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 14 00:12:07.188339 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 14 00:12:07.188351 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 14 00:12:07.188363 kernel: signal: max sigframe size: 1776
Mar 14 00:12:07.188430 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:12:07.188445 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:12:07.188465 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 14 00:12:07.188477 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:12:07.188489 kernel: smpboot: x86: Booting SMP configuration:
Mar 14 00:12:07.188554 kernel: .... node #0, CPUs: #1 #2 #3
Mar 14 00:12:07.188566 kernel: smp: Brought up 1 node, 4 CPUs
Mar 14 00:12:07.188578 kernel: smpboot: Max logical packages: 1
Mar 14 00:12:07.188590 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 14 00:12:07.188602 kernel: devtmpfs: initialized
Mar 14 00:12:07.188614 kernel: x86/mm: Memory block size: 128MB
Mar 14 00:12:07.188632 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:12:07.188645 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 14 00:12:07.188657 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:12:07.188670 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:12:07.188682 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:12:07.188695 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:12:07.188707 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 14 00:12:07.188720 kernel: audit: type=2000 audit(1773447120.709:1): state=initialized audit_enabled=0 res=1
Mar 14 00:12:07.188731 kernel: cpuidle: using governor menu
Mar 14 00:12:07.188748 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:12:07.188760 kernel: dca service started, version 1.12.1
Mar 14 00:12:07.188772 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 14 00:12:07.188784 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 14 00:12:07.188796 kernel: PCI: Using configuration type 1 for base access
Mar 14 00:12:07.188808 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 14 00:12:07.188820 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:12:07.188832 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:12:07.188844 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:12:07.188860 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:12:07.188872 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:12:07.188884 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:12:07.188896 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:12:07.188908 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 14 00:12:07.188993 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 14 00:12:07.189005 kernel: ACPI: Interpreter enabled
Mar 14 00:12:07.189018 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 14 00:12:07.189029 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 14 00:12:07.189047 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 14 00:12:07.189059 kernel: PCI: Using E820 reservations for host bridge windows
Mar 14 00:12:07.189071 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 14 00:12:07.189083 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 14 00:12:07.190864 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:12:07.191414 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 14 00:12:07.191795 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 14 00:12:07.191829 kernel: PCI host bridge to bus 0000:00
Mar 14 00:12:07.192288 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 14 00:12:07.192484 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 14 00:12:07.192741 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 14 00:12:07.193005 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 14 00:12:07.193189 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 14 00:12:07.193364 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 14 00:12:07.193623 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 14 00:12:07.194378 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 14 00:12:07.194737 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 14 00:12:07.195046 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 14 00:12:07.195296 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 14 00:12:07.195493 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 14 00:12:07.195831 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 14 00:12:07.196215 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 14 00:12:07.196417 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 14 00:12:07.196809 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 14 00:12:07.197204 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 14 00:12:07.197579 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 14 00:12:07.197798 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 14 00:12:07.198142 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 14 00:12:07.198346 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 14 00:12:07.198668 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 14 00:12:07.198879 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 14 00:12:07.199166 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 14 00:12:07.199359 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 14 00:12:07.199619 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 14 00:12:07.200208 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 14 00:12:07.200419 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 14 00:12:07.200848 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 14 00:12:07.201136 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 14 00:12:07.201330 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 14 00:12:07.201766 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 14 00:12:07.204600 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 14 00:12:07.204723 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 14 00:12:07.204737 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 14 00:12:07.204749 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 14 00:12:07.204760 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 14 00:12:07.204772 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 14 00:12:07.204783 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 14 00:12:07.204795 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 14 00:12:07.204806 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 14 00:12:07.204824 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 14 00:12:07.204835 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 14 00:12:07.204847 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 14 00:12:07.204858 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 14 00:12:07.204870 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 14 00:12:07.204881 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 14 00:12:07.204893 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 14 00:12:07.204904 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 14 00:12:07.205022 kernel: iommu: Default domain type: Translated
Mar 14 00:12:07.205044 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 14 00:12:07.205055 kernel: PCI: Using ACPI for IRQ routing
Mar 14 00:12:07.205067 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 14 00:12:07.205079 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 14 00:12:07.205090 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 14 00:12:07.205309 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 14 00:12:07.205570 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 14 00:12:07.205790 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 14 00:12:07.205809 kernel: vgaarb: loaded
Mar 14 00:12:07.205829 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 14 00:12:07.205841 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 14 00:12:07.205852 kernel: clocksource: Switched to clocksource kvm-clock
Mar 14 00:12:07.205864 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:12:07.205876 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:12:07.205887 kernel: pnp: PnP ACPI init
Mar 14 00:12:07.206405 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 14 00:12:07.206427 kernel: pnp: PnP ACPI: found 6 devices
Mar 14 00:12:07.206446 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 14 00:12:07.206458 kernel: NET: Registered PF_INET protocol family
Mar 14 00:12:07.206470 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 14 00:12:07.206482 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 14 00:12:07.206494 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:12:07.206571 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:12:07.206583 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 14 00:12:07.206595 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 14 00:12:07.206606 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:12:07.206624 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:12:07.206636 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 00:12:07.206650 kernel: NET: Registered PF_XDP protocol family
Mar 14 00:12:07.206853 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 14 00:12:07.207125 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 14 00:12:07.207305 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 14 00:12:07.207478 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 14 00:12:07.207732 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 14 00:12:07.208047 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 14 00:12:07.208067 kernel: PCI: CLS 0 bytes, default 64
Mar 14 00:12:07.208079 kernel: Initialise system trusted keyrings
Mar 14 00:12:07.208091 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 14 00:12:07.208102 kernel: Key type asymmetric registered
Mar 14 00:12:07.208114 kernel: Asymmetric key parser 'x509' registered
Mar 14 00:12:07.208125 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 14 00:12:07.208137 kernel: io scheduler mq-deadline registered
Mar 14 00:12:07.208148 kernel: io scheduler kyber registered
Mar 14 00:12:07.208165 kernel: io scheduler bfq registered
Mar 14 00:12:07.208177 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 14 00:12:07.208190 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 14 00:12:07.208202 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 14 00:12:07.208213 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 14 00:12:07.208225 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 14 00:12:07.208236 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 14 00:12:07.208248 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 14 00:12:07.208259 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 14 00:12:07.208275 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 14 00:12:07.208669 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 14 00:12:07.208693 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 14 00:12:07.208885 kernel: rtc_cmos 00:04: registered as rtc0
Mar 14 00:12:07.209161 kernel: rtc_cmos 00:04: setting system clock to 2026-03-14T00:12:05 UTC (1773447125)
Mar 14 00:12:07.209345 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 14 00:12:07.209360 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 14 00:12:07.209372 kernel: NET: Registered PF_INET6 protocol family
Mar 14 00:12:07.209390 kernel: Segment Routing with IPv6
Mar 14 00:12:07.209402 kernel: In-situ OAM (IOAM) with IPv6
Mar 14 00:12:07.209413 kernel: NET: Registered PF_PACKET protocol family
Mar 14 00:12:07.209425 kernel: Key type dns_resolver registered
Mar 14 00:12:07.209436 kernel: IPI shorthand broadcast: enabled
Mar 14 00:12:07.209448 kernel: sched_clock: Marking stable (4061039245, 925127296)->(5981226460, -995059919)
Mar 14 00:12:07.209460 kernel: registered taskstats version 1
Mar 14 00:12:07.209472 kernel: Loading compiled-in X.509 certificates
Mar 14 00:12:07.209483 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec'
Mar 14 00:12:07.209568 kernel: Key type .fscrypt registered
Mar 14 00:12:07.209582 kernel: Key type fscrypt-provisioning registered
Mar 14 00:12:07.209594 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 14 00:12:07.209606 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:12:07.209617 kernel: ima: No architecture policies found
Mar 14 00:12:07.209629 kernel: clk: Disabling unused clocks
Mar 14 00:12:07.209641 kernel: hrtimer: interrupt took 2696635 ns
Mar 14 00:12:07.209654 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 14 00:12:07.209666 kernel: Write protecting the kernel read-only data: 36864k
Mar 14 00:12:07.209686 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 14 00:12:07.209698 kernel: Run /init as init process
Mar 14 00:12:07.209709 kernel:   with arguments:
Mar 14 00:12:07.209721 kernel:     /init
Mar 14 00:12:07.209732 kernel:   with environment:
Mar 14 00:12:07.209743 kernel:     HOME=/
Mar 14 00:12:07.209755 kernel:     TERM=linux
Mar 14 00:12:07.209769 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:12:07.209788 systemd[1]: Detected virtualization kvm.
Mar 14 00:12:07.209800 systemd[1]: Detected architecture x86-64.
Mar 14 00:12:07.209812 systemd[1]: Running in initrd.
Mar 14 00:12:07.209823 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:12:07.209835 systemd[1]: Hostname set to .
Mar 14 00:12:07.209847 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:12:07.209860 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:12:07.209872 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:12:07.209888 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:12:07.209901 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:12:07.210025 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:12:07.210041 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:12:07.210053 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:12:07.210068 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:12:07.210086 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:12:07.210099 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:12:07.210111 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:12:07.210123 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:12:07.210135 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:12:07.210166 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:12:07.210182 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:12:07.210198 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:12:07.210210 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:12:07.210223 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:12:07.210235 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:12:07.210248 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:12:07.210260 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:12:07.210273 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:12:07.210286 systemd[1]: Reached target sockets.target - Socket Units. Mar 14 00:12:07.210302 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 14 00:12:07.210314 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 14 00:12:07.210327 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 14 00:12:07.210339 systemd[1]: Starting systemd-fsck-usr.service... Mar 14 00:12:07.210352 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 14 00:12:07.210364 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 14 00:12:07.210380 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:12:07.210393 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 14 00:12:07.210405 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:12:07.210422 systemd[1]: Finished systemd-fsck-usr.service. Mar 14 00:12:07.210436 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 14 00:12:07.210449 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:12:07.210462 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 14 00:12:07.210740 systemd-journald[195]: Collecting audit messages is disabled. Mar 14 00:12:07.210775 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:12:07.210789 systemd-journald[195]: Journal started Mar 14 00:12:07.210819 systemd-journald[195]: Runtime Journal (/run/log/journal/9b3e15c7597049babe3f0ef53eb57646) is 6.0M, max 48.4M, 42.3M free. Mar 14 00:12:07.147251 systemd-modules-load[196]: Inserted module 'overlay' Mar 14 00:12:07.566246 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Mar 14 00:12:07.566281 kernel: Bridge firewalling registered Mar 14 00:12:07.283023 systemd-modules-load[196]: Inserted module 'br_netfilter' Mar 14 00:12:07.582207 systemd[1]: Started systemd-journald.service - Journal Service. Mar 14 00:12:07.588740 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 14 00:12:07.602234 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:12:07.636286 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:12:07.651357 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:12:07.655845 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 14 00:12:07.697718 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:12:07.699193 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:12:07.733234 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 14 00:12:07.737602 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:12:07.747312 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 14 00:12:07.784384 dracut-cmdline[232]: dracut-dracut-053 Mar 14 00:12:07.789191 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:12:07.846251 systemd-resolved[230]: Positive Trust Anchors: Mar 14 00:12:07.846307 systemd-resolved[230]: . 
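The kernel message above ("Update your scripts to load br_netfilter if you need this") refers to the usual persistent fix: a systemd modules-load.d drop-in, which the `systemd-modules-load[196]` entries in this log are acting on. A minimal sketch, using a temp directory as a stand-in for `/etc/modules-load.d` so it runs unprivileged:

```shell
# Persist br_netfilter loading via systemd's modules-load.d mechanism.
# conf_dir is a stand-in for /etc/modules-load.d (assumption: on a real
# host you would write there as root and run `modprobe br_netfilter`
# once to load it immediately).
conf_dir=$(mktemp -d)
printf 'br_netfilter\n' > "$conf_dir/br_netfilter.conf"
cat "$conf_dir/br_netfilter.conf"
```

systemd-modules-load.service reads every `*.conf` file in that directory at early boot, one module name per line, which is exactly how this log's "Inserted module 'br_netfilter'" line came about.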
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:12:07.846336 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:12:07.897029 systemd-resolved[230]: Defaulting to hostname 'linux'. Mar 14 00:12:07.907439 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:12:07.907819 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:12:08.029302 kernel: SCSI subsystem initialized Mar 14 00:12:08.048228 kernel: Loading iSCSI transport class v2.0-870. Mar 14 00:12:08.079296 kernel: iscsi: registered transport (tcp) Mar 14 00:12:08.132227 kernel: iscsi: registered transport (qla4xxx) Mar 14 00:12:08.132492 kernel: QLogic iSCSI HBA Driver Mar 14 00:12:08.350817 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 14 00:12:08.380429 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 14 00:12:08.467608 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
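The negative trust anchors systemd-resolved lists above (home.arpa, the RFC 1918 reverse zones, `.internal`, and so on) are built in, but they can be extended with drop-in files: resolved reads `/etc/dnssec-trust-anchors.d/*.negative`, one domain per line. A sketch, with a temp directory standing in for the real path so it runs unprivileged:

```shell
# Add a site-local negative trust anchor so DNSSEC validation is not
# attempted for an internal zone. anchors_dir is a stand-in for
# /etc/dnssec-trust-anchors.d (assumption: real hosts write there as
# root and restart systemd-resolved).
anchors_dir=$(mktemp -d)
printf 'example.lan\n' > "$anchors_dir/site.negative"
cat "$anchors_dir/site.negative"
```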
Mar 14 00:12:08.467690 kernel: device-mapper: uevent: version 1.0.3 Mar 14 00:12:08.468754 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 14 00:12:08.582444 kernel: raid6: avx2x4 gen() 18162 MB/s Mar 14 00:12:08.600606 kernel: raid6: avx2x2 gen() 16979 MB/s Mar 14 00:12:08.640766 kernel: raid6: avx2x1 gen() 12106 MB/s Mar 14 00:12:08.641369 kernel: raid6: using algorithm avx2x4 gen() 18162 MB/s Mar 14 00:12:08.666435 kernel: raid6: .... xor() 4058 MB/s, rmw enabled Mar 14 00:12:08.666479 kernel: raid6: using avx2x2 recovery algorithm Mar 14 00:12:08.698061 kernel: xor: automatically using best checksumming function avx Mar 14 00:12:09.200182 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 14 00:12:09.291909 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 14 00:12:09.346444 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:12:09.377285 systemd-udevd[416]: Using default interface naming scheme 'v255'. Mar 14 00:12:09.385878 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:12:09.407309 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 14 00:12:09.453059 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Mar 14 00:12:09.595620 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 14 00:12:09.633473 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 14 00:12:09.852872 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:12:09.883576 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 14 00:12:09.954613 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 14 00:12:09.963459 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 14 00:12:09.982099 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:12:10.002104 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 14 00:12:10.048803 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 14 00:12:10.075130 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 14 00:12:10.075426 kernel: cryptd: max_cpu_qlen set to 1000 Mar 14 00:12:10.080832 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 00:12:10.103272 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 14 00:12:10.082857 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:12:10.156263 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 14 00:12:10.156358 kernel: GPT:9289727 != 19775487 Mar 14 00:12:10.156379 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 14 00:12:10.156394 kernel: GPT:9289727 != 19775487 Mar 14 00:12:10.156411 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 14 00:12:10.156425 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 14 00:12:10.164407 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:12:10.172173 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 00:12:10.182314 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:12:10.208263 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:12:10.286662 kernel: libata version 3.00 loaded. Mar 14 00:12:10.293079 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:12:10.311611 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 14 00:12:10.380058 kernel: AVX2 version of gcm_enc/dec engaged. 
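The GPT complaints above (`GPT:9289727 != 19775487`) are the classic signature of a grown disk image: the backup GPT header still sits at the image's original end rather than the end of the enlarged virtual disk. A hedged repair sketch — `DISK` is a hypothetical target taken from this log's `vda`, and the write is only attempted if `sgdisk` is installed:

```shell
# Relocate the backup GPT structures to the true end of a grown disk.
# DISK is an assumption; verify the device before running on real data.
DISK="${DISK:-/dev/vda}"
if command -v sgdisk >/dev/null 2>&1; then
    sgdisk -e "$DISK" || true   # -e: move backup header/table to disk end
fi
# GNU Parted, which the kernel message itself suggests, offers an
# interactive "Fix" prompt when `parted "$DISK" print` sees the mismatch.
echo "target: $DISK"
```

On Flatcar this is normally handled automatically by the first-boot partition-growing machinery, so the warning is expected and transient here.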
Mar 14 00:12:10.380125 kernel: AES CTR mode by8 optimization enabled Mar 14 00:12:10.411611 kernel: ahci 0000:00:1f.2: version 3.0 Mar 14 00:12:10.442772 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 14 00:12:10.443049 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 14 00:12:10.522398 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 14 00:12:10.527750 kernel: scsi host0: ahci Mar 14 00:12:10.528336 kernel: scsi host1: ahci Mar 14 00:12:10.528795 kernel: scsi host2: ahci Mar 14 00:12:10.529161 kernel: scsi host3: ahci Mar 14 00:12:10.529577 kernel: scsi host4: ahci Mar 14 00:12:10.530064 kernel: scsi host5: ahci Mar 14 00:12:10.533104 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Mar 14 00:12:10.533127 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Mar 14 00:12:10.533143 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Mar 14 00:12:10.540670 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Mar 14 00:12:10.540803 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Mar 14 00:12:10.548052 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Mar 14 00:12:10.566115 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (477) Mar 14 00:12:10.588795 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 14 00:12:10.596382 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Mar 14 00:12:10.979053 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (470) Mar 14 00:12:10.979106 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 14 00:12:10.979126 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 14 00:12:10.979144 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 14 00:12:10.979161 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 14 00:12:10.979176 kernel: ata3.00: applying bridge limits Mar 14 00:12:10.979283 kernel: ata3.00: configured for UDMA/100 Mar 14 00:12:10.979305 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 14 00:12:10.980113 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 14 00:12:10.980136 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 14 00:12:10.980152 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 14 00:12:10.996171 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:12:11.017102 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 14 00:12:11.046397 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 14 00:12:11.046851 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 14 00:12:11.040502 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 14 00:12:11.060469 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 14 00:12:11.100507 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 14 00:12:11.101293 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 14 00:12:11.143712 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:12:11.183306 disk-uuid[569]: Primary Header is updated. 
Mar 14 00:12:11.183306 disk-uuid[569]: Secondary Entries is updated. Mar 14 00:12:11.183306 disk-uuid[569]: Secondary Header is updated. Mar 14 00:12:11.214021 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 14 00:12:11.240512 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 14 00:12:11.271270 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:12:12.279317 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 14 00:12:12.281373 disk-uuid[570]: The operation has completed successfully. Mar 14 00:12:12.367272 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 14 00:12:12.367499 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 14 00:12:12.424798 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 14 00:12:12.447280 sh[594]: Success Mar 14 00:12:12.499798 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 14 00:12:12.667411 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 14 00:12:12.682239 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 14 00:12:12.726384 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 14 00:12:12.813290 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def Mar 14 00:12:12.814399 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:12:12.814421 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 14 00:12:12.818992 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 14 00:12:12.823677 kernel: BTRFS info (device dm-0): using free space tree Mar 14 00:12:12.862901 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
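verity-setup.service above maps the USR partition to `/dev/mapper/usr` using dm-verity with the root hash passed on the kernel command line (`verity.usrhash=...`). A hedged sketch of the equivalent manual step, using the partition UUID and hash from this boot's command line; `veritysetup` is only invoked if present (it needs root and the real devices, and Flatcar's exact hash-tree offset within the partition is an assumption not shown here):

```shell
# Manually opening a dm-verity device, approximating what
# verity-setup.service does for /dev/mapper/usr. Paths and hash are
# copied from the kernel command line earlier in this log.
usr_part=/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132
usr_hash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
if command -v veritysetup >/dev/null 2>&1; then
    # veritysetup open <data_dev> <name> <hash_dev> <root_hash>; a real
    # Flatcar invocation would also need the distro-specific hash offset.
    veritysetup open "$usr_part" usr "$usr_part" "$usr_hash" || true
fi
echo "root hash: $usr_hash"
```

Any block whose hash tree does not chain up to this root hash produces an I/O error on read, which is why `/usr` can be mounted read-only from an untrusted disk.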
Mar 14 00:12:12.875255 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 14 00:12:12.901787 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 14 00:12:12.946488 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 14 00:12:12.986273 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:12:12.986601 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:12:13.005582 kernel: BTRFS info (device vda6): using free space tree Mar 14 00:12:13.055278 kernel: BTRFS info (device vda6): auto enabling async discard Mar 14 00:12:13.088085 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 14 00:12:13.098609 kernel: BTRFS info (device vda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:12:13.152721 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 14 00:12:13.188330 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 14 00:12:13.547168 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 00:12:13.577396 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 14 00:12:13.629723 ignition[696]: Ignition 2.19.0 Mar 14 00:12:13.630143 ignition[696]: Stage: fetch-offline Mar 14 00:12:13.631257 ignition[696]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:12:13.631276 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 14 00:12:13.631671 ignition[696]: parsed url from cmdline: "" Mar 14 00:12:13.631678 ignition[696]: no config URL provided Mar 14 00:12:13.631687 ignition[696]: reading system config file "/usr/lib/ignition/user.ign" Mar 14 00:12:13.631703 ignition[696]: no config at "/usr/lib/ignition/user.ign" Mar 14 00:12:13.695864 systemd-networkd[781]: lo: Link UP Mar 14 00:12:13.632087 ignition[696]: op(1): [started] loading QEMU firmware config module Mar 14 00:12:13.695872 systemd-networkd[781]: lo: Gained carrier Mar 14 00:12:13.632096 ignition[696]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 14 00:12:13.700415 systemd-networkd[781]: Enumeration completed Mar 14 00:12:13.691411 ignition[696]: op(1): [finished] loading QEMU firmware config module Mar 14 00:12:13.704526 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:12:13.706840 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:12:13.706849 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:12:13.715111 systemd[1]: Reached target network.target - Network. Mar 14 00:12:13.737484 systemd-networkd[781]: eth0: Link UP Mar 14 00:12:13.737493 systemd-networkd[781]: eth0: Gained carrier Mar 14 00:12:13.737512 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 14 00:12:13.782167 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.32/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 14 00:12:13.937258 ignition[696]: parsing config with SHA512: 8b9f59d0edeb64e65c449d3d2dc55d73b8810fe46b45db395526d0e358ef887438acb200e866003af2c92312ea7f0060a48f97ff23ae568831006e22e91b5977 Mar 14 00:12:13.968911 unknown[696]: fetched base config from "system" Mar 14 00:12:13.969041 unknown[696]: fetched user config from "qemu" Mar 14 00:12:13.972695 ignition[696]: fetch-offline: fetch-offline passed Mar 14 00:12:13.973047 ignition[696]: Ignition finished successfully Mar 14 00:12:13.994527 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 14 00:12:14.007332 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 14 00:12:14.033400 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 14 00:12:14.155208 ignition[787]: Ignition 2.19.0 Mar 14 00:12:14.155264 ignition[787]: Stage: kargs Mar 14 00:12:14.155778 ignition[787]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:12:14.155797 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 14 00:12:14.157604 ignition[787]: kargs: kargs passed Mar 14 00:12:14.157677 ignition[787]: Ignition finished successfully Mar 14 00:12:14.187106 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 14 00:12:14.211363 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
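Ignition above reports "no config URL provided" and "no config at /usr/lib/ignition/user.ign", then falls back to fetching the user config through the QEMU firmware config module. For reference, the smallest user config it would accept at that path is just a version stanza; the spec version below is an assumption for Ignition 2.19.0:

```shell
# Write a minimal (empty) Ignition user config. cfg is a stand-in for
# /usr/lib/ignition/user.ign; the spec version is an assumption.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{"ignition": {"version": "3.4.0"}}
EOF
cat "$cfg"
```

With only a version stanza, the fetch-offline, kargs, disks, and files stages all pass with nothing to do, much like the "no configs" paths visible in the stages later in this log.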
Mar 14 00:12:14.288248 ignition[795]: Ignition 2.19.0 Mar 14 00:12:14.288328 ignition[795]: Stage: disks Mar 14 00:12:14.288696 ignition[795]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:12:14.288718 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 14 00:12:14.308442 ignition[795]: disks: disks passed Mar 14 00:12:14.308671 ignition[795]: Ignition finished successfully Mar 14 00:12:14.321367 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 14 00:12:14.328661 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 14 00:12:14.347617 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 14 00:12:14.365692 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 14 00:12:14.379321 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:12:14.395888 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:12:14.433699 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 14 00:12:14.498908 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 14 00:12:14.506674 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 14 00:12:14.531264 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 14 00:12:14.767172 kernel: EXT4-fs (vda9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none. Mar 14 00:12:14.768413 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 14 00:12:14.773088 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 14 00:12:14.798159 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 14 00:12:14.804068 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Mar 14 00:12:14.846857 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813) Mar 14 00:12:14.846893 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:12:14.846906 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:12:14.846987 kernel: BTRFS info (device vda6): using free space tree Mar 14 00:12:14.812102 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 14 00:12:14.862814 kernel: BTRFS info (device vda6): auto enabling async discard Mar 14 00:12:14.812176 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 14 00:12:14.812206 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 14 00:12:14.848684 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 14 00:12:14.865063 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 14 00:12:14.900318 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 14 00:12:15.013636 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Mar 14 00:12:15.024300 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Mar 14 00:12:15.038152 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Mar 14 00:12:15.046285 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Mar 14 00:12:15.221522 systemd-networkd[781]: eth0: Gained IPv6LL Mar 14 00:12:15.328347 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 14 00:12:15.348237 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 14 00:12:15.350065 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Mar 14 00:12:15.390896 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 14 00:12:15.399492 kernel: BTRFS info (device vda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:12:15.460538 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 14 00:12:15.956414 ignition[927]: INFO : Ignition 2.19.0 Mar 14 00:12:15.956414 ignition[927]: INFO : Stage: mount Mar 14 00:12:15.964885 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:12:15.964885 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 14 00:12:15.964885 ignition[927]: INFO : mount: mount passed Mar 14 00:12:15.964885 ignition[927]: INFO : Ignition finished successfully Mar 14 00:12:15.980468 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 14 00:12:16.007300 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 14 00:12:16.032328 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 14 00:12:16.159780 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940) Mar 14 00:12:16.172605 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:12:16.172673 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:12:16.172685 kernel: BTRFS info (device vda6): using free space tree Mar 14 00:12:16.189087 kernel: BTRFS info (device vda6): auto enabling async discard Mar 14 00:12:16.192522 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 14 00:12:16.349950 ignition[957]: INFO : Ignition 2.19.0 Mar 14 00:12:16.349950 ignition[957]: INFO : Stage: files Mar 14 00:12:16.366265 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:12:16.366265 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 14 00:12:16.366265 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Mar 14 00:12:16.366265 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 14 00:12:16.366265 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 14 00:12:16.406525 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 14 00:12:16.406525 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 14 00:12:16.406525 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 14 00:12:16.406525 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 14 00:12:16.406525 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 14 00:12:16.379051 unknown[957]: wrote ssh authorized keys file for user: core Mar 14 00:12:16.849729 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 14 00:12:17.271487 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 14 00:12:17.271487 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 14 00:12:17.292688 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 14 
00:12:17.292688 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 14 00:12:17.292688 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 14 00:12:17.292688 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 14 00:12:17.292688 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 14 00:12:17.292688 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 14 00:12:17.292688 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 14 00:12:17.292688 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 14 00:12:17.292688 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 14 00:12:17.292688 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 14 00:12:17.292688 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 14 00:12:17.292688 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 14 00:12:17.292688 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Mar 14 00:12:17.673654 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 14 00:12:19.891488 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 14 00:12:19.891488 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 14 00:12:19.914801 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 14 00:12:19.914801 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 14 00:12:19.914801 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 14 00:12:19.914801 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 14 00:12:19.914801 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 14 00:12:19.914801 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 14 00:12:19.914801 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 14 00:12:19.914801 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 14 00:12:20.190855 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 14 00:12:20.203870 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 14 00:12:20.204208 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled 
for "coreos-metadata.service" Mar 14 00:12:20.204208 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 14 00:12:20.204208 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 14 00:12:20.230697 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 14 00:12:20.230697 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 14 00:12:20.230697 ignition[957]: INFO : files: files passed Mar 14 00:12:20.230697 ignition[957]: INFO : Ignition finished successfully Mar 14 00:12:20.265645 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 14 00:12:20.295312 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 14 00:12:20.308463 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 14 00:12:20.332544 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 14 00:12:20.338164 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 14 00:12:20.352704 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Mar 14 00:12:20.361312 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:12:20.361312 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:12:20.379393 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:12:20.393419 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 14 00:12:20.421134 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Mar 14 00:12:20.452860 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 14 00:12:20.561096 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 14 00:12:20.570492 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 14 00:12:20.587103 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 14 00:12:20.602075 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 14 00:12:20.612158 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 14 00:12:20.639133 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 14 00:12:20.883837 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 14 00:12:20.960169 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 14 00:12:20.996713 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:12:21.008519 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:12:21.053403 systemd[1]: Stopped target timers.target - Timer Units. Mar 14 00:12:21.065230 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 14 00:12:21.065465 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 14 00:12:21.096348 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 14 00:12:21.111014 systemd[1]: Stopped target basic.target - Basic System. Mar 14 00:12:21.111408 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 14 00:12:21.147391 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 14 00:12:21.171347 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 14 00:12:21.171746 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Mar 14 00:12:21.193378 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:12:21.234203 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:12:21.279262 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:12:21.305291 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:12:21.326352 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:12:21.339006 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:12:21.361317 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:12:21.377143 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:12:21.416206 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:12:21.436713 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:12:21.468692 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:12:21.478551 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:12:21.528443 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 00:12:21.529049 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:12:21.573315 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 00:12:21.583293 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 00:12:21.594866 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:12:21.651396 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 00:12:21.685068 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 00:12:21.705842 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 00:12:21.706385 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:12:21.734379 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 00:12:21.734703 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:12:21.760046 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:12:21.760567 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:12:21.770081 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:12:21.771216 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:12:21.808791 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:12:21.851797 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:12:21.865286 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:12:21.866434 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:12:21.880068 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:12:21.906847 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:12:22.009225 ignition[1011]: INFO : Ignition 2.19.0
Mar 14 00:12:22.009225 ignition[1011]: INFO : Stage: umount
Mar 14 00:12:22.045656 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:22.045656 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:12:22.045656 ignition[1011]: INFO : umount: umount passed
Mar 14 00:12:22.045656 ignition[1011]: INFO : Ignition finished successfully
Mar 14 00:12:22.040729 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:12:22.041263 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:12:22.143665 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 00:12:22.144132 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 00:12:22.168323 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:12:22.174873 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 00:12:22.175235 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 00:12:22.200479 systemd[1]: Stopped target network.target - Network.
Mar 14 00:12:22.206304 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 00:12:22.211288 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 00:12:22.248235 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 00:12:22.250759 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 00:12:22.255456 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 00:12:22.255552 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 00:12:22.287169 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 00:12:22.287635 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 00:12:22.302192 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 00:12:22.303079 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 00:12:22.344660 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 00:12:22.401296 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 00:12:22.417854 systemd-networkd[781]: eth0: DHCPv6 lease lost
Mar 14 00:12:22.474324 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 00:12:22.475198 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 00:12:22.512763 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 00:12:22.539734 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 00:12:22.571744 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 00:12:22.571895 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:12:22.633136 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 00:12:22.648395 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 00:12:22.648535 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:12:22.680022 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:12:22.690157 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:12:22.708493 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 14 00:12:22.708799 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:12:22.717330 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 14 00:12:22.717431 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:12:22.813062 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:12:22.863504 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 14 00:12:22.873470 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:12:22.897775 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 14 00:12:22.898122 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 14 00:12:22.921278 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 14 00:12:22.921496 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:12:22.936343 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 14 00:12:22.936434 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:12:22.958272 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 14 00:12:22.958410 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:12:22.978998 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 14 00:12:22.979126 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:12:23.006190 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:12:23.006323 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:12:23.062416 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 14 00:12:23.095275 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 14 00:12:23.098634 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:12:23.143492 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 14 00:12:23.146209 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:12:23.209002 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:12:23.209096 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:12:23.230054 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:12:23.230187 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:12:23.237868 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 14 00:12:23.238820 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 14 00:12:23.251397 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 14 00:12:23.301496 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 14 00:12:23.331809 systemd[1]: Switching root.
Mar 14 00:12:23.416549 systemd-journald[195]: Journal stopped
Mar 14 00:12:29.759757 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Mar 14 00:12:29.763564 kernel: SELinux: policy capability network_peer_controls=1
Mar 14 00:12:29.763827 kernel: SELinux: policy capability open_perms=1
Mar 14 00:12:29.763856 kernel: SELinux: policy capability extended_socket_class=1
Mar 14 00:12:29.774792 kernel: SELinux: policy capability always_check_network=0
Mar 14 00:12:29.774895 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 14 00:12:29.775166 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 14 00:12:29.775241 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 14 00:12:29.775261 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 14 00:12:29.775291 kernel: audit: type=1403 audit(1773447143.833:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 14 00:12:29.775435 systemd[1]: Successfully loaded SELinux policy in 138.286ms.
Mar 14 00:12:29.775750 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 50.278ms.
Mar 14 00:12:29.775780 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:12:29.775798 systemd[1]: Detected virtualization kvm.
Mar 14 00:12:29.775880 systemd[1]: Detected architecture x86-64.
Mar 14 00:12:29.776074 systemd[1]: Detected first boot.
Mar 14 00:12:29.776150 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:12:29.776169 zram_generator::config[1056]: No configuration found.
Mar 14 00:12:29.776190 systemd[1]: Populated /etc with preset unit settings.
Mar 14 00:12:29.776260 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 14 00:12:29.776327 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 14 00:12:29.776346 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:12:29.776413 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 00:12:29.776433 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 00:12:29.776450 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 00:12:29.776511 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 00:12:29.776531 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 00:12:29.776702 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 00:12:29.776728 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 00:12:29.776794 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 00:12:29.776882 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:12:29.777081 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:12:29.777103 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 00:12:29.777122 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 00:12:29.777142 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 00:12:29.777268 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:12:29.777289 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 14 00:12:29.777309 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:12:29.777327 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 14 00:12:29.777346 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 14 00:12:29.777373 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:12:29.777390 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 14 00:12:29.777410 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:12:29.777487 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:12:29.777562 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:12:29.777686 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:12:29.777756 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 14 00:12:29.777827 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 14 00:12:29.777850 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:12:29.777867 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:12:29.777885 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:12:29.777903 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 14 00:12:29.778024 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 14 00:12:29.778045 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 14 00:12:29.778066 systemd[1]: Mounting media.mount - External Media Directory...
Mar 14 00:12:29.778083 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:12:29.778152 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 14 00:12:29.778259 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 14 00:12:29.778278 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 14 00:12:29.778299 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 14 00:12:29.778317 systemd[1]: Reached target machines.target - Containers.
Mar 14 00:12:29.778335 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 14 00:12:29.778352 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:12:29.778369 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:12:29.778443 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 14 00:12:29.778555 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:12:29.778579 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:12:29.778601 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:12:29.778681 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 14 00:12:29.778708 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:12:29.778726 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 00:12:29.778795 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 14 00:12:29.778815 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 14 00:12:29.778890 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 14 00:12:29.778911 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 14 00:12:29.779030 kernel: fuse: init (API version 7.39)
Mar 14 00:12:29.779051 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:12:29.779068 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:12:29.779089 kernel: ACPI: bus type drm_connector registered
Mar 14 00:12:29.779105 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 14 00:12:29.779124 kernel: loop: module loaded
Mar 14 00:12:29.779141 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 14 00:12:29.779221 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:12:29.779288 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 14 00:12:29.779362 systemd[1]: Stopped verity-setup.service.
Mar 14 00:12:29.779385 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:12:29.779563 systemd-journald[1140]: Collecting audit messages is disabled.
Mar 14 00:12:29.779800 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 14 00:12:29.779825 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 14 00:12:29.779909 systemd-journald[1140]: Journal started
Mar 14 00:12:29.780118 systemd-journald[1140]: Runtime Journal (/run/log/journal/9b3e15c7597049babe3f0ef53eb57646) is 6.0M, max 48.4M, 42.3M free.
Mar 14 00:12:27.822464 systemd[1]: Queued start job for default target multi-user.target.
Mar 14 00:12:27.905076 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 14 00:12:27.906991 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 14 00:12:27.908191 systemd[1]: systemd-journald.service: Consumed 2.624s CPU time.
Mar 14 00:12:29.804127 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:12:29.805460 systemd[1]: Mounted media.mount - External Media Directory.
Mar 14 00:12:29.812236 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 14 00:12:29.827298 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 14 00:12:29.835778 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 14 00:12:29.842832 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 14 00:12:29.851484 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:12:29.860342 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 14 00:12:29.860798 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 14 00:12:29.870577 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:12:29.871088 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:12:29.886425 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:12:29.887190 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:12:29.896524 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:12:29.898123 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:12:29.909178 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 14 00:12:29.909513 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 14 00:12:29.928828 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:12:29.936799 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:12:29.947191 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:12:29.959125 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 14 00:12:29.967075 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 14 00:12:29.991224 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:12:30.013302 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 14 00:12:30.040299 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 14 00:12:30.055750 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 14 00:12:30.065122 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 14 00:12:30.065231 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:12:30.075247 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 14 00:12:30.087134 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 14 00:12:30.099787 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 14 00:12:30.107021 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:12:30.110855 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 14 00:12:30.130850 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 14 00:12:30.137712 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:12:30.143258 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 14 00:12:30.150514 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:12:30.153084 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:12:30.161569 systemd-journald[1140]: Time spent on flushing to /var/log/journal/9b3e15c7597049babe3f0ef53eb57646 is 109.389ms for 943 entries.
Mar 14 00:12:30.161569 systemd-journald[1140]: System Journal (/var/log/journal/9b3e15c7597049babe3f0ef53eb57646) is 8.0M, max 195.6M, 187.6M free.
Mar 14 00:12:30.338835 systemd-journald[1140]: Received client request to flush runtime journal.
Mar 14 00:12:30.339009 kernel: loop0: detected capacity change from 0 to 140768
Mar 14 00:12:30.164296 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 14 00:12:30.178392 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:12:30.198292 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 14 00:12:30.209702 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 14 00:12:30.220612 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 14 00:12:30.228723 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 14 00:12:30.239878 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 14 00:12:30.286712 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 14 00:12:30.311334 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 14 00:12:30.350217 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 14 00:12:30.385852 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:12:30.487091 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 14 00:12:30.489278 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Mar 14 00:12:30.489360 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Mar 14 00:12:30.492501 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 14 00:12:30.494040 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 14 00:12:30.503486 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 14 00:12:30.509887 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:12:30.533507 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 14 00:12:30.572138 kernel: loop1: detected capacity change from 0 to 142488
Mar 14 00:12:30.703042 kernel: loop2: detected capacity change from 0 to 217752
Mar 14 00:12:30.845276 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 14 00:12:30.885866 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:12:30.945208 kernel: loop3: detected capacity change from 0 to 140768
Mar 14 00:12:31.051136 kernel: loop4: detected capacity change from 0 to 142488
Mar 14 00:12:31.077517 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Mar 14 00:12:31.077577 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Mar 14 00:12:31.097741 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:12:31.182858 kernel: loop5: detected capacity change from 0 to 217752
Mar 14 00:12:31.241830 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 14 00:12:31.243702 (sd-merge)[1195]: Merged extensions into '/usr'.
Mar 14 00:12:31.260761 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 14 00:12:31.260793 systemd[1]: Reloading...
Mar 14 00:12:31.658039 zram_generator::config[1223]: No configuration found.
Mar 14 00:12:31.999496 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:12:32.118621 systemd[1]: Reloading finished in 847 ms.
Mar 14 00:12:32.264281 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 14 00:12:32.304218 systemd[1]: Starting ensure-sysext.service...
Mar 14 00:12:32.374110 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:12:32.391350 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Mar 14 00:12:32.439140 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 14 00:12:32.391377 systemd[1]: Reloading...
Mar 14 00:12:32.688895 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 14 00:12:32.689791 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 14 00:12:32.691699 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 14 00:12:32.694166 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Mar 14 00:12:32.694369 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Mar 14 00:12:32.705224 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:12:32.705283 systemd-tmpfiles[1260]: Skipping /boot
Mar 14 00:12:32.722162 zram_generator::config[1289]: No configuration found.
Mar 14 00:12:32.752763 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:12:32.752777 systemd-tmpfiles[1260]: Skipping /boot
Mar 14 00:12:32.990729 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:12:33.084578 systemd[1]: Reloading finished in 692 ms.
Mar 14 00:12:33.286222 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 14 00:12:33.295745 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 14 00:12:33.329264 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:12:33.462253 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:12:33.490321 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 14 00:12:33.510380 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 14 00:12:33.555603 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:12:33.565246 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:12:33.579638 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 14 00:12:33.591240 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:12:33.591752 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:12:33.595087 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:12:33.613377 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:12:33.640416 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:12:33.645750 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:12:33.652342 augenrules[1351]: No rules
Mar 14 00:12:33.652386 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 14 00:12:33.654264 systemd-udevd[1340]: Using default interface naming scheme 'v255'.
Mar 14 00:12:33.658431 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:12:33.666515 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:12:33.681281 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 14 00:12:33.695430 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:12:33.695877 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:12:33.710449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:12:33.710849 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:12:33.747417 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:12:33.747816 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:12:33.787298 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:12:33.787627 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:12:33.845823 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:12:33.948414 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:12:34.003431 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:12:34.009167 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:12:34.012182 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 14 00:12:34.042213 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:12:34.044410 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:12:34.060318 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 14 00:12:34.073533 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 14 00:12:34.087418 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 14 00:12:34.096222 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:12:34.096579 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:12:34.105159 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:12:34.105473 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:12:34.135140 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:12:34.135416 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:12:34.143447 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 14 00:12:34.199802 systemd[1]: Finished ensure-sysext.service.
Mar 14 00:12:34.246472 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 14 00:12:34.246788 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:12:34.247216 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:12:34.257289 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:12:34.271330 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:12:34.286200 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:12:34.293882 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:12:34.299119 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:12:34.311429 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:12:34.325333 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 14 00:12:34.347105 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 14 00:12:34.347190 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:12:34.438126 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:12:34.478717 systemd-resolved[1337]: Positive Trust Anchors:
Mar 14 00:12:34.479184 systemd-resolved[1337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:12:34.479233 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:12:34.482824 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:12:34.495588 systemd-resolved[1337]: Defaulting to hostname 'linux'.
Mar 14 00:12:34.505306 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:12:34.510090 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1373)
Mar 14 00:12:34.516847 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:12:34.518386 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:12:34.537738 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:12:34.540171 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:12:34.548626 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:12:34.549222 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:12:34.630238 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:12:34.641052 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:12:34.641267 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:12:34.695414 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 14 00:12:34.705050 kernel: ACPI: button: Power Button [PWRF]
Mar 14 00:12:34.739534 systemd-networkd[1407]: lo: Link UP
Mar 14 00:12:34.739593 systemd-networkd[1407]: lo: Gained carrier
Mar 14 00:12:34.742193 systemd-networkd[1407]: Enumeration completed
Mar 14 00:12:34.742324 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:12:34.744149 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:12:34.744196 systemd-networkd[1407]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:12:34.747262 systemd-networkd[1407]: eth0: Link UP
Mar 14 00:12:34.747275 systemd-networkd[1407]: eth0: Gained carrier
Mar 14 00:12:34.747295 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:12:34.750446 systemd[1]: Reached target network.target - Network.
Mar 14 00:12:34.765410 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 14 00:12:34.774088 systemd-networkd[1407]: eth0: DHCPv4 address 10.0.0.32/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 14 00:12:34.842362 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 14 00:12:34.877092 systemd[1]: Reached target time-set.target - System Time Set.
Mar 14 00:12:35.512937 systemd-timesyncd[1408]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 14 00:12:35.513019 systemd-timesyncd[1408]: Initial clock synchronization to Sat 2026-03-14 00:12:35.512606 UTC.
Mar 14 00:12:35.516040 systemd-resolved[1337]: Clock change detected. Flushing caches.
Mar 14 00:12:35.567516 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 14 00:12:35.567724 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 14 00:12:35.568955 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 14 00:12:35.576039 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 14 00:12:35.628279 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:12:35.654086 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 14 00:12:35.683995 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 14 00:12:35.708417 kernel: mousedev: PS/2 mouse device common for all mice
Mar 14 00:12:35.802786 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 14 00:12:36.121334 kernel: kvm_amd: TSC scaling supported
Mar 14 00:12:36.121472 kernel: kvm_amd: Nested Virtualization enabled
Mar 14 00:12:36.121490 kernel: kvm_amd: Nested Paging enabled
Mar 14 00:12:36.121589 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 14 00:12:36.121635 kernel: kvm_amd: PMU virtualization is disabled
Mar 14 00:12:36.249387 kernel: EDAC MC: Ver: 3.0.0
Mar 14 00:12:36.428118 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 14 00:12:36.700327 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 14 00:12:36.781611 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:12:36.870938 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:12:36.934539 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 14 00:12:36.956704 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:12:36.962524 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:12:36.967685 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 14 00:12:36.974066 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 14 00:12:37.020939 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 14 00:12:37.027411 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 14 00:12:37.034522 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 14 00:12:37.058424 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 14 00:12:37.058545 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:12:37.063578 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:12:37.071345 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 14 00:12:37.079954 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 14 00:12:37.087604 systemd-networkd[1407]: eth0: Gained IPv6LL
Mar 14 00:12:37.096511 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 14 00:12:37.104980 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 14 00:12:37.112947 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 14 00:12:37.119771 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 14 00:12:37.127288 systemd[1]: Reached target network-online.target - Network is Online.
Mar 14 00:12:37.133512 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:12:37.133542 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:12:37.154812 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:12:37.160410 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:12:37.160464 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:12:37.171598 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 14 00:12:37.184308 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 14 00:12:37.255373 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 14 00:12:37.289534 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 14 00:12:37.307836 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 14 00:12:37.327614 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 14 00:12:37.366410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:12:37.379362 jq[1440]: false
Mar 14 00:12:37.379819 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 14 00:12:37.389727 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 14 00:12:37.394927 dbus-daemon[1439]: [system] SELinux support is enabled
Mar 14 00:12:37.403647 extend-filesystems[1441]: Found loop3
Mar 14 00:12:37.403647 extend-filesystems[1441]: Found loop4
Mar 14 00:12:37.445823 extend-filesystems[1441]: Found loop5
Mar 14 00:12:37.445823 extend-filesystems[1441]: Found sr0
Mar 14 00:12:37.445823 extend-filesystems[1441]: Found vda
Mar 14 00:12:37.445823 extend-filesystems[1441]: Found vda1
Mar 14 00:12:37.445823 extend-filesystems[1441]: Found vda2
Mar 14 00:12:37.445823 extend-filesystems[1441]: Found vda3
Mar 14 00:12:37.445823 extend-filesystems[1441]: Found usr
Mar 14 00:12:37.445823 extend-filesystems[1441]: Found vda4
Mar 14 00:12:37.445823 extend-filesystems[1441]: Found vda6
Mar 14 00:12:37.445823 extend-filesystems[1441]: Found vda7
Mar 14 00:12:37.445823 extend-filesystems[1441]: Found vda9
Mar 14 00:12:37.445823 extend-filesystems[1441]: Checking size of /dev/vda9
Mar 14 00:12:37.445823 extend-filesystems[1441]: Resized partition /dev/vda9
Mar 14 00:12:37.688538 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 14 00:12:37.688591 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1381)
Mar 14 00:12:37.688608 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 14 00:12:37.406386 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 14 00:12:37.688800 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024)
Mar 14 00:12:37.688800 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 14 00:12:37.688800 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 14 00:12:37.688800 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 14 00:12:37.422964 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 14 00:12:37.751597 extend-filesystems[1441]: Resized filesystem in /dev/vda9
Mar 14 00:12:37.450747 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 14 00:12:37.519599 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 14 00:12:37.528544 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 14 00:12:37.529614 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 14 00:12:37.566196 systemd[1]: Starting update-engine.service - Update Engine...
Mar 14 00:12:37.755600 update_engine[1468]: I20260314 00:12:37.649653 1468 main.cc:92] Flatcar Update Engine starting
Mar 14 00:12:37.755600 update_engine[1468]: I20260314 00:12:37.652830 1468 update_check_scheduler.cc:74] Next update check in 5m56s
Mar 14 00:12:37.590533 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 14 00:12:37.756252 jq[1470]: true
Mar 14 00:12:37.598675 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 14 00:12:37.612354 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 14 00:12:37.650632 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 14 00:12:37.651067 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 14 00:12:37.651717 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 14 00:12:37.652307 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 14 00:12:37.671024 systemd-logind[1465]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 14 00:12:37.671050 systemd-logind[1465]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 14 00:12:37.673034 systemd[1]: motdgen.service: Deactivated successfully.
Mar 14 00:12:37.673427 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 14 00:12:37.675557 systemd-logind[1465]: New seat seat0.
Mar 14 00:12:37.686858 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 14 00:12:37.715120 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 14 00:12:37.738593 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 14 00:12:37.738995 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 14 00:12:37.802345 jq[1478]: true
Mar 14 00:12:37.808467 (ntainerd)[1479]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 14 00:12:37.825457 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 14 00:12:37.831487 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 14 00:12:37.832053 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 14 00:12:37.924434 dbus-daemon[1439]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 14 00:12:37.929223 tar[1476]: linux-amd64/LICENSE
Mar 14 00:12:37.929223 tar[1476]: linux-amd64/helm
Mar 14 00:12:37.970284 systemd[1]: Started update-engine.service - Update Engine.
Mar 14 00:12:37.992365 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 14 00:12:37.993345 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 14 00:12:37.993571 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 14 00:12:38.011488 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 14 00:12:38.012397 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 14 00:12:38.060086 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 14 00:12:38.273379 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 14 00:12:38.383594 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 14 00:12:38.487692 systemd[1]: issuegen.service: Deactivated successfully.
Mar 14 00:12:38.488104 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 14 00:12:38.574622 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 14 00:12:39.420828 bash[1522]: Updated "/home/core/.ssh/authorized_keys"
Mar 14 00:12:39.428979 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 14 00:12:39.438871 locksmithd[1511]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 14 00:12:39.453520 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 14 00:12:40.022515 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 14 00:12:40.297826 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 14 00:12:40.378760 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 14 00:12:40.386881 systemd[1]: Reached target getty.target - Login Prompts.
Mar 14 00:12:41.786403 containerd[1479]: time="2026-03-14T00:12:41.784873105Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 14 00:12:42.021682 containerd[1479]: time="2026-03-14T00:12:42.020429128Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:12:42.028819 containerd[1479]: time="2026-03-14T00:12:42.028652741Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:12:42.028819 containerd[1479]: time="2026-03-14T00:12:42.028787513Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 14 00:12:42.029055 containerd[1479]: time="2026-03-14T00:12:42.028963822Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 14 00:12:42.029771 containerd[1479]: time="2026-03-14T00:12:42.029739881Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 14 00:12:42.030116 containerd[1479]: time="2026-03-14T00:12:42.029989217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 14 00:12:42.030698 containerd[1479]: time="2026-03-14T00:12:42.030667403Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:12:42.030863 containerd[1479]: time="2026-03-14T00:12:42.030838933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:12:42.032428 containerd[1479]: time="2026-03-14T00:12:42.032404686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:12:42.032496 containerd[1479]: time="2026-03-14T00:12:42.032478914Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 14 00:12:42.032602 containerd[1479]: time="2026-03-14T00:12:42.032580815Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:12:42.032656 containerd[1479]: time="2026-03-14T00:12:42.032643562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 14 00:12:42.032969 containerd[1479]: time="2026-03-14T00:12:42.032894340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:12:42.034457 containerd[1479]: time="2026-03-14T00:12:42.034431741Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:12:42.035432 containerd[1479]: time="2026-03-14T00:12:42.035403845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:12:42.035511 containerd[1479]: time="2026-03-14T00:12:42.035489004Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 14 00:12:42.035707 containerd[1479]: time="2026-03-14T00:12:42.035687575Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 14 00:12:42.035824 containerd[1479]: time="2026-03-14T00:12:42.035807368Z" level=info msg="metadata content store policy set" policy=shared
Mar 14 00:12:42.073782 containerd[1479]: time="2026-03-14T00:12:42.073011619Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 14 00:12:42.073782 containerd[1479]: time="2026-03-14T00:12:42.073246437Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 14 00:12:42.073782 containerd[1479]: time="2026-03-14T00:12:42.073277776Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 14 00:12:42.073782 containerd[1479]: time="2026-03-14T00:12:42.073302402Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 14 00:12:42.073782 containerd[1479]: time="2026-03-14T00:12:42.073321858Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 14 00:12:42.073782 containerd[1479]: time="2026-03-14T00:12:42.073578026Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 14 00:12:42.074369 containerd[1479]: time="2026-03-14T00:12:42.074274687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 14 00:12:42.074679 containerd[1479]: time="2026-03-14T00:12:42.074479480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 14 00:12:42.074679 containerd[1479]: time="2026-03-14T00:12:42.074512731Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 14 00:12:42.074679 containerd[1479]: time="2026-03-14T00:12:42.074536886Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 14 00:12:42.074679 containerd[1479]: time="2026-03-14T00:12:42.074557906Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 14 00:12:42.074679 containerd[1479]: time="2026-03-14T00:12:42.074582472Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 14 00:12:42.074679 containerd[1479]: time="2026-03-14T00:12:42.074601588Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 14 00:12:42.074679 containerd[1479]: time="2026-03-14T00:12:42.074626023Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 14 00:12:42.074679 containerd[1479]: time="2026-03-14T00:12:42.074651721Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 14 00:12:42.074679 containerd[1479]: time="2026-03-14T00:12:42.074672850Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 14 00:12:42.075295 containerd[1479]: time="2026-03-14T00:12:42.074690674Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 14 00:12:42.075295 containerd[1479]: time="2026-03-14T00:12:42.074713225Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 14 00:12:42.075295 containerd[1479]: time="2026-03-14T00:12:42.074753060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 14 00:12:42.075295 containerd[1479]: time="2026-03-14T00:12:42.074775883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 14 00:12:42.075295 containerd[1479]: time="2026-03-14T00:12:42.074794928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 14 00:12:42.075295 containerd[1479]: time="2026-03-14T00:12:42.074817971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 14 00:12:42.075295 containerd[1479]: time="2026-03-14T00:12:42.074837728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 14 00:12:42.075295 containerd[1479]: time="2026-03-14T00:12:42.074856853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 14 00:12:42.075295 containerd[1479]: time="2026-03-14T00:12:42.074877292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 14 00:12:42.075295 containerd[1479]: time="2026-03-14T00:12:42.074971919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 14 00:12:42.075295 containerd[1479]: time="2026-03-14T00:12:42.074998839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 14 00:12:42.075295 containerd[1479]: time="2026-03-14T00:12:42.075032141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 14 00:12:42.075295 containerd[1479]: time="2026-03-14T00:12:42.075052459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 14 00:12:42.075295 containerd[1479]: time="2026-03-14T00:12:42.075082766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 14 00:12:42.076244 containerd[1479]: time="2026-03-14T00:12:42.075105508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 14 00:12:42.076244 containerd[1479]: time="2026-03-14T00:12:42.075800596Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 14 00:12:42.076244 containerd[1479]: time="2026-03-14T00:12:42.076007031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 14 00:12:42.076244 containerd[1479]: time="2026-03-14T00:12:42.076108861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 14 00:12:42.076244 containerd[1479]: time="2026-03-14T00:12:42.076238844Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 14 00:12:42.076699 containerd[1479]: time="2026-03-14T00:12:42.076547520Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 14 00:12:42.076744 containerd[1479]: time="2026-03-14T00:12:42.076576063Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 14 00:12:42.076744 containerd[1479]: time="2026-03-14T00:12:42.076711887Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 14 00:12:42.077058 containerd[1479]: time="2026-03-14T00:12:42.076728508Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 14 00:12:42.077105 containerd[1479]: time="2026-03-14T00:12:42.076866055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 14 00:12:42.077296 containerd[1479]: time="2026-03-14T00:12:42.077102787Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 14 00:12:42.077296 containerd[1479]: time="2026-03-14T00:12:42.077285107Z" level=info msg="NRI interface is disabled by configuration."
Mar 14 00:12:42.077601 containerd[1479]: time="2026-03-14T00:12:42.077450947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 14 00:12:42.079769 containerd[1479]: time="2026-03-14T00:12:42.078756705Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 14 00:12:42.079769 containerd[1479]: time="2026-03-14T00:12:42.078999858Z" level=info msg="Connect containerd service"
Mar 14 00:12:42.079769 containerd[1479]: time="2026-03-14T00:12:42.079058418Z" level=info msg="using legacy CRI server"
Mar 14 00:12:42.079769 containerd[1479]: time="2026-03-14T00:12:42.079080910Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 14 00:12:42.079769 containerd[1479]: time="2026-03-14T00:12:42.079638210Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 14 00:12:42.090642 containerd[1479]: time="2026-03-14T00:12:42.089540928Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:12:42.090642 containerd[1479]: time="2026-03-14T00:12:42.089810491Z" level=info msg="Start subscribing containerd event"
Mar 14 00:12:42.090642 containerd[1479]: time="2026-03-14T00:12:42.089869391Z" level=info msg="Start recovering state"
Mar 14 00:12:42.090642 containerd[1479]: time="2026-03-14T00:12:42.090027356Z" level=info msg="Start event monitor"
Mar 14 00:12:42.090642 containerd[1479]: time="2026-03-14T00:12:42.090058424Z" level=info msg="Start snapshots syncer"
Mar 14 00:12:42.090642 containerd[1479]: time="2026-03-14T00:12:42.090071268Z" level=info msg="Start cni network conf syncer for default"
Mar 14 00:12:42.090642 containerd[1479]: time="2026-03-14T00:12:42.090084011Z" level=info msg="Start streaming server"
Mar 14 00:12:42.094334 containerd[1479]: time="2026-03-14T00:12:42.094084959Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 14 00:12:42.094392 containerd[1479]: time="2026-03-14T00:12:42.094358861Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 14 00:12:42.094629 containerd[1479]: time="2026-03-14T00:12:42.094510754Z" level=info msg="containerd successfully booted in 0.333678s"
Mar 14 00:12:42.095278 systemd[1]: Started containerd.service - containerd container runtime.
Mar 14 00:12:43.164363 tar[1476]: linux-amd64/README.md
Mar 14 00:12:43.220359 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 14 00:12:46.676424 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 14 00:12:46.688497 systemd[1]: Started sshd@0-10.0.0.32:22-10.0.0.1:50910.service - OpenSSH per-connection server daemon (10.0.0.1:50910).
Mar 14 00:12:46.881830 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 50910 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:12:46.890298 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:12:47.001366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:12:47.008743 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:12:47.016114 systemd-logind[1465]: New session 1 of user core.
Mar 14 00:12:47.018528 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 14 00:12:47.018617 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 14 00:12:47.021375 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 14 00:12:47.186368 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 14 00:12:47.205245 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 14 00:12:47.234037 (systemd)[1561]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 14 00:12:47.565505 systemd[1561]: Queued start job for default target default.target.
Mar 14 00:12:47.578668 systemd[1561]: Created slice app.slice - User Application Slice.
Mar 14 00:12:47.578708 systemd[1561]: Reached target paths.target - Paths.
Mar 14 00:12:47.578729 systemd[1561]: Reached target timers.target - Timers.
Mar 14 00:12:47.584739 systemd[1561]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 14 00:12:48.009087 systemd[1561]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 14 00:12:48.114290 systemd[1561]: Reached target sockets.target - Sockets.
Mar 14 00:12:48.114477 systemd[1561]: Reached target basic.target - Basic System.
Mar 14 00:12:48.114891 systemd[1561]: Reached target default.target - Main User Target.
Mar 14 00:12:48.115048 systemd[1561]: Startup finished in 853ms.
Mar 14 00:12:48.133253 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 14 00:12:48.320039 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 14 00:12:48.321425 systemd[1]: Startup finished in 4.470s (kernel) + 17.553s (initrd) + 23.995s (userspace) = 46.019s.
Mar 14 00:12:48.440792 systemd[1]: Started sshd@1-10.0.0.32:22-10.0.0.1:50914.service - OpenSSH per-connection server daemon (10.0.0.1:50914).
Mar 14 00:12:48.571255 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 50914 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:12:48.575563 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:12:48.590447 systemd-logind[1465]: New session 2 of user core.
Mar 14 00:12:48.627744 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 14 00:12:49.215246 sshd[1577]: pam_unix(sshd:session): session closed for user core
Mar 14 00:12:49.252929 systemd[1]: sshd@1-10.0.0.32:22-10.0.0.1:50914.service: Deactivated successfully.
Mar 14 00:12:49.257008 systemd[1]: session-2.scope: Deactivated successfully.
Mar 14 00:12:49.261112 systemd-logind[1465]: Session 2 logged out. Waiting for processes to exit.
Mar 14 00:12:49.274843 systemd[1]: Started sshd@2-10.0.0.32:22-10.0.0.1:50930.service - OpenSSH per-connection server daemon (10.0.0.1:50930).
Mar 14 00:12:49.277830 systemd-logind[1465]: Removed session 2.
Mar 14 00:12:49.354601 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 50930 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:12:49.359549 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:12:49.373640 systemd-logind[1465]: New session 3 of user core.
Mar 14 00:12:49.380542 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 14 00:12:49.457999 sshd[1584]: pam_unix(sshd:session): session closed for user core
Mar 14 00:12:49.472426 systemd[1]: sshd@2-10.0.0.32:22-10.0.0.1:50930.service: Deactivated successfully.
Mar 14 00:12:49.475072 systemd[1]: session-3.scope: Deactivated successfully.
Mar 14 00:12:49.479704 systemd-logind[1465]: Session 3 logged out. Waiting for processes to exit.
Mar 14 00:12:49.485864 systemd[1]: Started sshd@3-10.0.0.32:22-10.0.0.1:50946.service - OpenSSH per-connection server daemon (10.0.0.1:50946).
Mar 14 00:12:49.489055 systemd-logind[1465]: Removed session 3.
Mar 14 00:12:49.551709 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 50946 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:12:49.556767 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:12:49.570246 systemd-logind[1465]: New session 4 of user core.
Mar 14 00:12:49.584746 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 14 00:12:51.174439 sshd[1591]: pam_unix(sshd:session): session closed for user core
Mar 14 00:12:51.192632 systemd[1]: sshd@3-10.0.0.32:22-10.0.0.1:50946.service: Deactivated successfully.
Mar 14 00:12:51.197375 systemd[1]: session-4.scope: Deactivated successfully.
Mar 14 00:12:51.197668 systemd[1]: session-4.scope: Consumed 1.508s CPU time.
Mar 14 00:12:51.203412 systemd-logind[1465]: Session 4 logged out. Waiting for processes to exit.
Mar 14 00:12:51.212880 systemd[1]: Started sshd@4-10.0.0.32:22-10.0.0.1:57466.service - OpenSSH per-connection server daemon (10.0.0.1:57466).
Mar 14 00:12:51.217492 systemd-logind[1465]: Removed session 4.
Mar 14 00:12:51.728364 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 57466 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:12:51.919577 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:12:52.227373 systemd-logind[1465]: New session 5 of user core.
Mar 14 00:12:52.255937 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 14 00:12:52.318566 kubelet[1557]: E0314 00:12:52.317454 1557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:12:52.327755 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:12:52.328033 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:12:52.328826 systemd[1]: kubelet.service: Consumed 11.498s CPU time.
Mar 14 00:12:52.414922 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 14 00:12:52.416646 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:13:00.815464 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 14 00:13:00.822277 (dockerd)[1625]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 14 00:13:02.580424 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:13:02.678324 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:13:07.492890 dockerd[1625]: time="2026-03-14T00:13:07.491967442Z" level=info msg="Starting up"
Mar 14 00:13:08.511225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:13:08.514982 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:13:10.216920 kubelet[1645]: E0314 00:13:10.215580 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:13:10.231286 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:13:10.232086 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:13:10.233747 systemd[1]: kubelet.service: Consumed 6.623s CPU time.
Mar 14 00:13:10.413993 systemd[1]: var-lib-docker-metacopy\x2dcheck2160571576-merged.mount: Deactivated successfully.
Mar 14 00:13:10.560041 dockerd[1625]: time="2026-03-14T00:13:10.556875907Z" level=info msg="Loading containers: start."
Mar 14 00:13:11.660615 kernel: Initializing XFRM netlink socket
Mar 14 00:13:12.321525 systemd-networkd[1407]: docker0: Link UP
Mar 14 00:13:12.463559 dockerd[1625]: time="2026-03-14T00:13:12.462924698Z" level=info msg="Loading containers: done."
Mar 14 00:13:12.696452 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1560932494-merged.mount: Deactivated successfully.
Mar 14 00:13:12.791424 dockerd[1625]: time="2026-03-14T00:13:12.788956842Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 14 00:13:12.807841 dockerd[1625]: time="2026-03-14T00:13:12.806570465Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 14 00:13:12.810276 dockerd[1625]: time="2026-03-14T00:13:12.808333505Z" level=info msg="Daemon has completed initialization"
Mar 14 00:13:13.160038 dockerd[1625]: time="2026-03-14T00:13:13.154091026Z" level=info msg="API listen on /run/docker.sock"
Mar 14 00:13:13.161358 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 14 00:13:19.594691 containerd[1479]: time="2026-03-14T00:13:19.584004835Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 14 00:13:20.265955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 14 00:13:20.348971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:13:21.320732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:13:21.330827 (kubelet)[1797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:13:21.625300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount801489818.mount: Deactivated successfully.
Mar 14 00:13:21.691273 kubelet[1797]: E0314 00:13:21.690997 1797 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:13:21.697726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:13:21.698089 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:13:21.698877 systemd[1]: kubelet.service: Consumed 1.047s CPU time.
Mar 14 00:13:23.317015 update_engine[1468]: I20260314 00:13:23.316210 1468 update_attempter.cc:509] Updating boot flags...
Mar 14 00:13:23.511584 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1871)
Mar 14 00:13:28.709097 containerd[1479]: time="2026-03-14T00:13:28.707601268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:28.712310 containerd[1479]: time="2026-03-14T00:13:28.710113994Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467"
Mar 14 00:13:28.714263 containerd[1479]: time="2026-03-14T00:13:28.713977260Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:28.724008 containerd[1479]: time="2026-03-14T00:13:28.723798472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:28.726378 containerd[1479]: time="2026-03-14T00:13:28.725606049Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 9.131665003s"
Mar 14 00:13:28.726569 containerd[1479]: time="2026-03-14T00:13:28.726446181Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\""
Mar 14 00:13:28.735495 containerd[1479]: time="2026-03-14T00:13:28.735454423Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 14 00:13:32.068779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 14 00:13:32.989549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:13:34.898006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:13:34.925681 (kubelet)[1889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:13:35.482372 kubelet[1889]: E0314 00:13:35.481588 1889 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:13:35.495607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:13:35.497805 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:13:35.498737 systemd[1]: kubelet.service: Consumed 1.754s CPU time.
Mar 14 00:13:37.886361 containerd[1479]: time="2026-03-14T00:13:37.885228488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:37.891806 containerd[1479]: time="2026-03-14T00:13:37.891744922Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700"
Mar 14 00:13:37.897244 containerd[1479]: time="2026-03-14T00:13:37.895481694Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:37.906263 containerd[1479]: time="2026-03-14T00:13:37.905970753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:37.909410 containerd[1479]: time="2026-03-14T00:13:37.909319904Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 9.173659318s"
Mar 14 00:13:37.909410 containerd[1479]: time="2026-03-14T00:13:37.909396167Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\""
Mar 14 00:13:37.921461 containerd[1479]: time="2026-03-14T00:13:37.920952681Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 14 00:13:45.622943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 14 00:13:45.702720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:13:49.882930 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:13:49.999844 (kubelet)[1910]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:13:52.675710 containerd[1479]: time="2026-03-14T00:13:52.636576016Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429"
Mar 14 00:13:52.727711 containerd[1479]: time="2026-03-14T00:13:52.710236340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:52.890076 kubelet[1910]: E0314 00:13:52.888456 1910 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:13:52.905900 containerd[1479]: time="2026-03-14T00:13:52.905746800Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:52.910027 containerd[1479]: time="2026-03-14T00:13:52.909678891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:52.910930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:13:52.911506 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:13:52.912606 systemd[1]: kubelet.service: Consumed 4.375s CPU time.
Mar 14 00:13:52.969953 containerd[1479]: time="2026-03-14T00:13:52.929051087Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 15.007745111s"
Mar 14 00:13:52.969953 containerd[1479]: time="2026-03-14T00:13:52.969355062Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\""
Mar 14 00:13:53.048930 containerd[1479]: time="2026-03-14T00:13:53.038048687Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 14 00:14:03.019597 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 14 00:14:03.066114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:14:05.203058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2374541301.mount: Deactivated successfully.
Mar 14 00:14:05.681888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:05.720457 (kubelet)[1931]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:14:06.278253 kubelet[1931]: E0314 00:14:06.275111 1931 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:14:06.283649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:14:06.284022 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:14:06.286064 systemd[1]: kubelet.service: Consumed 2.071s CPU time.
Mar 14 00:14:09.076352 containerd[1479]: time="2026-03-14T00:14:09.074333281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:09.076352 containerd[1479]: time="2026-03-14T00:14:09.074559583Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312"
Mar 14 00:14:09.081504 containerd[1479]: time="2026-03-14T00:14:09.080086639Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:09.090663 containerd[1479]: time="2026-03-14T00:14:09.090542764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:09.094668 containerd[1479]: time="2026-03-14T00:14:09.094554884Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 16.048551029s"
Mar 14 00:14:09.094668 containerd[1479]: time="2026-03-14T00:14:09.094646847Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\""
Mar 14 00:14:09.100925 containerd[1479]: time="2026-03-14T00:14:09.100007198Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 14 00:14:10.213764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1805932653.mount: Deactivated successfully.
Mar 14 00:14:16.515743 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 14 00:14:16.562653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:14:18.464587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:18.514327 (kubelet)[2008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:14:19.289913 kubelet[2008]: E0314 00:14:19.288601 2008 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:14:19.307782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:14:19.308106 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:14:19.310093 systemd[1]: kubelet.service: Consumed 1.839s CPU time.
Mar 14 00:14:25.079845 containerd[1479]: time="2026-03-14T00:14:25.079403834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:25.087576 containerd[1479]: time="2026-03-14T00:14:25.087509643Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542"
Mar 14 00:14:25.098191 containerd[1479]: time="2026-03-14T00:14:25.094642149Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:25.105678 containerd[1479]: time="2026-03-14T00:14:25.103271095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:25.105678 containerd[1479]: time="2026-03-14T00:14:25.105116670Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 16.004871787s"
Mar 14 00:14:25.105678 containerd[1479]: time="2026-03-14T00:14:25.105266745Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Mar 14 00:14:25.111839 containerd[1479]: time="2026-03-14T00:14:25.111793427Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 14 00:14:26.889329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount406508568.mount: Deactivated successfully.
Mar 14 00:14:27.113398 containerd[1479]: time="2026-03-14T00:14:27.095612300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:27.113398 containerd[1479]: time="2026-03-14T00:14:27.112849701Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 14 00:14:27.123916 containerd[1479]: time="2026-03-14T00:14:27.119010398Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:27.173687 containerd[1479]: time="2026-03-14T00:14:27.171586418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:27.181596 containerd[1479]: time="2026-03-14T00:14:27.181275857Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 2.069057078s"
Mar 14 00:14:27.181596 containerd[1479]: time="2026-03-14T00:14:27.181341803Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 14 00:14:27.190722 containerd[1479]: time="2026-03-14T00:14:27.190625784Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 14 00:14:28.426720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2284597353.mount: Deactivated successfully.
Mar 14 00:14:29.667478 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Mar 14 00:14:29.791693 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:14:31.144744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:31.166400 (kubelet)[2043]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:14:31.312317 kubelet[2043]: E0314 00:14:31.311886 2043 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:14:31.317596 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:14:31.317931 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:14:31.318813 systemd[1]: kubelet.service: Consumed 1.028s CPU time.
Mar 14 00:14:37.173435 containerd[1479]: time="2026-03-14T00:14:37.172814599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:37.175113 containerd[1479]: time="2026-03-14T00:14:37.174515128Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322"
Mar 14 00:14:37.177441 containerd[1479]: time="2026-03-14T00:14:37.177347103Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:37.187269 containerd[1479]: time="2026-03-14T00:14:37.186721751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:37.188679 containerd[1479]: time="2026-03-14T00:14:37.188624768Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 9.997744584s"
Mar 14 00:14:37.189239 containerd[1479]: time="2026-03-14T00:14:37.188808747Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Mar 14 00:14:41.578327 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Mar 14 00:14:41.655234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:14:44.056726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:44.105906 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:14:44.864106 kubelet[2133]: E0314 00:14:44.862615 2133 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:14:44.984810 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:14:44.985702 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:14:44.996907 systemd[1]: kubelet.service: Consumed 1.957s CPU time, 11.1M memory peak, 0B memory swap peak.
Mar 14 00:14:48.685509 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:48.685902 systemd[1]: kubelet.service: Consumed 1.957s CPU time, 11.1M memory peak, 0B memory swap peak.
Mar 14 00:14:48.710442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:14:48.860809 systemd[1]: Reloading requested from client PID 2148 ('systemctl') (unit session-5.scope)...
Mar 14 00:14:48.860838 systemd[1]: Reloading...
Mar 14 00:14:49.511111 zram_generator::config[2190]: No configuration found.
Mar 14 00:14:53.722393 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:14:54.207399 systemd[1]: Reloading finished in 5345 ms.
Mar 14 00:14:54.562519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:54.572876 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:14:54.582528 systemd[1]: kubelet.service: Deactivated successfully.
Mar 14 00:14:54.583317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:54.583383 systemd[1]: kubelet.service: Consumed 1.732s CPU time, 24.7M memory peak, 0B memory swap peak.
Mar 14 00:14:54.620495 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:14:55.918318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:55.948705 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:14:56.730977 kubelet[2237]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:14:57.089552 kubelet[2237]: I0314 00:14:57.086450 2237 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 14 00:14:57.089552 kubelet[2237]: I0314 00:14:57.088311 2237 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:14:57.089552 kubelet[2237]: I0314 00:14:57.088627 2237 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 14 00:14:57.089552 kubelet[2237]: I0314 00:14:57.088643 2237 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:14:57.093493 kubelet[2237]: I0314 00:14:57.090672 2237 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 14 00:14:57.251081 kubelet[2237]: E0314 00:14:57.247064 2237 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:14:57.266703 kubelet[2237]: I0314 00:14:57.260895 2237 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:14:57.476023 kubelet[2237]: E0314 00:14:57.473562 2237 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:14:57.476023 kubelet[2237]: I0314 00:14:57.473721 2237 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:14:57.581423 kubelet[2237]: I0314 00:14:57.580063 2237 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 14 00:14:57.597465 kubelet[2237]: I0314 00:14:57.594307 2237 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:14:57.598743 kubelet[2237]: I0314 00:14:57.596268 2237 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:14:57.599660 kubelet[2237]: I0314 00:14:57.599068 2237 topology_manager.go:143] "Creating topology manager with none policy"
Mar 14 00:14:57.599660 kubelet[2237]: I0314 00:14:57.599100 2237 container_manager_linux.go:308] "Creating device plugin manager"
Mar 14 00:14:57.601039 kubelet[2237]: I0314 00:14:57.600894 2237 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 14 00:14:57.608104 kubelet[2237]: I0314 00:14:57.607942 2237 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 14 00:14:57.609595 kubelet[2237]: I0314 00:14:57.609397 2237 kubelet.go:482] "Attempting to sync node with API server"
Mar 14 00:14:57.609889 kubelet[2237]: I0314 00:14:57.609715 2237 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:14:57.612237 kubelet[2237]: I0314 00:14:57.612030 2237 kubelet.go:394] "Adding apiserver pod source"
Mar 14 00:14:57.628260 kubelet[2237]: I0314 00:14:57.614967 2237 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:14:57.889330 kubelet[2237]: I0314 00:14:57.888732 2237 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:14:57.905548 kubelet[2237]: I0314 00:14:57.905361 2237 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:14:57.905548 kubelet[2237]: I0314 00:14:57.905471 2237 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 14 00:14:57.906015 kubelet[2237]: W0314 00:14:57.905850 2237 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 14 00:14:57.934331 kubelet[2237]: I0314 00:14:57.932370 2237 server.go:1257] "Started kubelet"
Mar 14 00:14:57.957527 kubelet[2237]: I0314 00:14:57.957270 2237 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:14:57.957677 kubelet[2237]: I0314 00:14:57.957645 2237 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 14 00:14:57.961974 kubelet[2237]: I0314 00:14:57.959384 2237 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:14:57.961974 kubelet[2237]: I0314 00:14:57.959680 2237 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:14:57.967256 kubelet[2237]: I0314 00:14:57.967020 2237 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:14:57.972257 kubelet[2237]: I0314 00:14:57.970967 2237 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 14 00:14:57.975973 kubelet[2237]: I0314 00:14:57.975339 2237 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:14:57.998905 kubelet[2237]: E0314 00:14:57.993090 2237 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 14 00:14:57.998905 kubelet[2237]: E0314 00:14:57.995012 2237 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 14 00:14:57.998905 kubelet[2237]: I0314 00:14:57.995383 2237 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 14 00:14:57.998905 kubelet[2237]: I0314 00:14:57.997503 2237 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 14 00:14:57.998905 kubelet[2237]: I0314 00:14:57.997882 2237 reconciler.go:29] "Reconciler: start to sync state"
Mar 14 00:14:58.007703 kubelet[2237]: E0314 00:14:58.006407 2237 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="200ms"
Mar 14 00:14:58.480653 kubelet[2237]: E0314 00:14:58.479881 2237 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 14 00:14:58.489021 kubelet[2237]: I0314 00:14:58.484023 2237 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:14:58.489021 kubelet[2237]: I0314 00:14:58.484650 2237 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:14:58.495983 kubelet[2237]: E0314 00:14:58.493016 2237 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="400ms"
Mar 14 00:14:58.519273 kubelet[2237]: E0314 00:14:58.515018 2237 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.32:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.32:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189c8cebdd1ecc78 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-14 00:14:57.93211916 +0000 UTC m=+1.936822834,LastTimestamp:2026-03-14 00:14:57.93211916 +0000 UTC m=+1.936822834,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 14 00:14:58.522984 kubelet[2237]: I0314 00:14:58.521408 2237 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:14:58.582501 kubelet[2237]: E0314 00:14:58.580052 2237 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 14 00:14:58.705438 kubelet[2237]: E0314 00:14:58.702700 2237 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 14 00:14:58.980572 kubelet[2237]: E0314 00:14:58.974677 2237 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 14 00:14:59.003433 kubelet[2237]: E0314 00:14:59.003292 2237 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="800ms"
Mar 14 00:14:59.078005 kubelet[2237]: E0314 00:14:59.077410 2237 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 14 00:14:59.193296 kubelet[2237]: E0314 00:14:59.180444 2237 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 14 00:14:59.199485 kubelet[2237]: I0314 00:14:59.195600 2237 cpu_manager.go:225] "Starting" policy="none"
Mar 14 00:14:59.199485 kubelet[2237]: I0314 00:14:59.195626 2237 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 14 00:14:59.199485 kubelet[2237]: I0314 00:14:59.195720 2237 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 14 00:14:59.210098 kubelet[2237]: I0314 00:14:59.209817 2237 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:14:59.216298 kubelet[2237]: I0314 00:14:59.216111 2237 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:14:59.219238 kubelet[2237]: I0314 00:14:59.216521 2237 policy_none.go:50] "Start"
Mar 14 00:14:59.219238 kubelet[2237]: I0314 00:14:59.216536 2237 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 14 00:14:59.219238 kubelet[2237]: I0314 00:14:59.216685 2237 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 14 00:14:59.219238 kubelet[2237]: I0314 00:14:59.217013 2237 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 14 00:14:59.219238 kubelet[2237]: E0314 00:14:59.217371 2237 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:14:59.219238 kubelet[2237]: I0314 00:14:59.217858 2237 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 14 00:14:59.229548 kubelet[2237]: I0314 00:14:59.229452 2237 policy_none.go:44] "Start"
Mar 14 00:14:59.271482 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 14 00:14:59.281907 kubelet[2237]: E0314 00:14:59.281031 2237 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 14 00:14:59.390005 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 14 00:14:59.394923 kubelet[2237]: E0314 00:14:59.394702 2237 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 14 00:14:59.417109 kubelet[2237]: E0314 00:14:59.416112 2237 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 14 00:14:59.428792 kubelet[2237]: E0314 00:14:59.428506 2237 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:14:59.434453 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 14 00:14:59.471419 kubelet[2237]: E0314 00:14:59.471225 2237 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:14:59.475526 kubelet[2237]: I0314 00:14:59.473901 2237 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 14 00:14:59.475526 kubelet[2237]: I0314 00:14:59.474019 2237 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:14:59.485319 kubelet[2237]: I0314 00:14:59.478425 2237 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 14 00:14:59.498932 kubelet[2237]: E0314 00:14:59.496949 2237 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:14:59.498932 kubelet[2237]: E0314 00:14:59.497315 2237 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 14 00:14:59.783018 kubelet[2237]: I0314 00:14:59.760447 2237 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:14:59.801966 kubelet[2237]: E0314 00:14:59.800304 2237 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost"
Mar 14 00:14:59.807002 kubelet[2237]: E0314 00:14:59.806952 2237 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="1.6s"
Mar 14 00:14:59.822881 kubelet[2237]: I0314 00:14:59.822413 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ded2046bd35fb6afd7a11176668771c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ded2046bd35fb6afd7a11176668771c\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:14:59.822881 kubelet[2237]: I0314 00:14:59.822465 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ded2046bd35fb6afd7a11176668771c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ded2046bd35fb6afd7a11176668771c\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:14:59.822881 kubelet[2237]: I0314 00:14:59.822496 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ded2046bd35fb6afd7a11176668771c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5ded2046bd35fb6afd7a11176668771c\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:14:59.926932 kubelet[2237]: I0314 00:14:59.923019 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:14:59.926932 kubelet[2237]: I0314 00:14:59.923227 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:14:59.926932 kubelet[2237]: I0314 00:14:59.923264 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:14:59.926932 kubelet[2237]: I0314 00:14:59.923286 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost"
Mar 14 00:14:59.926932 kubelet[2237]: I0314 00:14:59.923316 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:14:59.924857 systemd[1]: Created slice kubepods-burstable-pod5ded2046bd35fb6afd7a11176668771c.slice - libcontainer container kubepods-burstable-pod5ded2046bd35fb6afd7a11176668771c.slice.
Mar 14 00:14:59.927964 kubelet[2237]: I0314 00:14:59.923337 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:15:00.005973 kubelet[2237]: E0314 00:15:00.003397 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:15:00.010773 kubelet[2237]: I0314 00:15:00.009695 2237 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:15:00.010773 kubelet[2237]: E0314 00:15:00.010494 2237 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost"
Mar 14 00:15:00.014629 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice.
Mar 14 00:15:00.017898 kubelet[2237]: E0314 00:15:00.017658 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:00.021405 containerd[1479]: time="2026-03-14T00:15:00.021262586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5ded2046bd35fb6afd7a11176668771c,Namespace:kube-system,Attempt:0,}"
Mar 14 00:15:00.069061 kubelet[2237]: E0314 00:15:00.067574 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:15:00.081551 kubelet[2237]: E0314 00:15:00.072063 2237 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.32:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.32:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189c8cebdd1ecc78 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-14 00:14:57.93211916 +0000 UTC m=+1.936822834,LastTimestamp:2026-03-14 00:14:57.93211916 +0000 UTC m=+1.936822834,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 14 00:15:00.080993 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice.
Mar 14 00:15:00.095090 kubelet[2237]: E0314 00:15:00.090350 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:15:00.100341 kubelet[2237]: E0314 00:15:00.097567 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:00.104365 containerd[1479]: time="2026-03-14T00:15:00.103084414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}"
Mar 14 00:15:00.110296 kubelet[2237]: E0314 00:15:00.109013 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:00.110526 containerd[1479]: time="2026-03-14T00:15:00.109891392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}"
Mar 14 00:15:00.420570 kubelet[2237]: I0314 00:15:00.416383 2237 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:15:00.425071 kubelet[2237]: E0314 00:15:00.421754 2237 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost"
Mar 14 00:15:01.279050 kubelet[2237]: I0314 00:15:01.278638 2237 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:15:01.282410 kubelet[2237]: E0314 00:15:01.280097 2237 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost"
Mar 14 00:15:01.329240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4264490035.mount: Deactivated successfully.
Mar 14 00:15:01.373016 containerd[1479]: time="2026-03-14T00:15:01.372838403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:15:01.460560 kubelet[2237]: E0314 00:15:01.459762 2237 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="3.2s"
Mar 14 00:15:01.467820 containerd[1479]: time="2026-03-14T00:15:01.464375222Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 14 00:15:01.467820 containerd[1479]: time="2026-03-14T00:15:01.465981565Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:15:01.478627 containerd[1479]: time="2026-03-14T00:15:01.478453076Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:15:01.492256 containerd[1479]: time="2026-03-14T00:15:01.491421931Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 14 00:15:01.494484 containerd[1479]: time="2026-03-14T00:15:01.494114750Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:15:01.504342 containerd[1479]: time="2026-03-14T00:15:01.504266432Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 14 00:15:01.522967 containerd[1479]: time="2026-03-14T00:15:01.521520337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:15:01.526901 containerd[1479]: time="2026-03-14T00:15:01.526255039Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.422868143s"
Mar 14 00:15:01.676240 containerd[1479]: time="2026-03-14T00:15:01.672854171Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.651443185s"
Mar 14 00:15:01.676240 containerd[1479]: time="2026-03-14T00:15:01.672917500Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.562920547s"
Mar 14 00:15:02.978952 kubelet[2237]: I0314 00:15:02.978106 2237 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:15:02.982675 kubelet[2237]: E0314 00:15:02.980268 2237 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost"
Mar 14 00:15:03.694064 containerd[1479]: time="2026-03-14T00:15:03.692523715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:15:03.694064 containerd[1479]: time="2026-03-14T00:15:03.692998121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:15:03.694064 containerd[1479]: time="2026-03-14T00:15:03.693234281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:03.697548 containerd[1479]: time="2026-03-14T00:15:03.694909509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:03.702248 containerd[1479]: time="2026-03-14T00:15:03.700977973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:15:03.702248 containerd[1479]: time="2026-03-14T00:15:03.701247476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:15:03.702248 containerd[1479]: time="2026-03-14T00:15:03.701272551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:03.702248 containerd[1479]: time="2026-03-14T00:15:03.701416194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:03.977943 containerd[1479]: time="2026-03-14T00:15:03.959055020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:15:03.981260 containerd[1479]: time="2026-03-14T00:15:03.977252635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:15:03.981260 containerd[1479]: time="2026-03-14T00:15:03.977428797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:03.981427 containerd[1479]: time="2026-03-14T00:15:03.981278511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:03.982333 kubelet[2237]: E0314 00:15:03.982272 2237 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:15:04.326737 systemd[1]: Started cri-containerd-0179e3ab097e0c9b0495146047d500e3831f4e2941ccca344449d0a913d7ce9e.scope - libcontainer container 0179e3ab097e0c9b0495146047d500e3831f4e2941ccca344449d0a913d7ce9e.
Mar 14 00:15:04.331379 systemd[1]: Started cri-containerd-b532ba7f1f1febca032652e3abfc5d2d216aa73b7f298b38ce9e7f8d9aa455c7.scope - libcontainer container b532ba7f1f1febca032652e3abfc5d2d216aa73b7f298b38ce9e7f8d9aa455c7.
Mar 14 00:15:04.375105 systemd[1]: Started cri-containerd-cd789a166ed36c3cbec07fa6325831a72fd503cd0ea0c9ee3352eb03826df611.scope - libcontainer container cd789a166ed36c3cbec07fa6325831a72fd503cd0ea0c9ee3352eb03826df611.
Mar 14 00:15:04.763728 kubelet[2237]: E0314 00:15:04.748691 2237 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="6.4s"
Mar 14 00:15:05.618746 containerd[1479]: time="2026-03-14T00:15:05.618564886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"b532ba7f1f1febca032652e3abfc5d2d216aa73b7f298b38ce9e7f8d9aa455c7\""
Mar 14 00:15:05.622893 containerd[1479]: time="2026-03-14T00:15:05.621622683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5ded2046bd35fb6afd7a11176668771c,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd789a166ed36c3cbec07fa6325831a72fd503cd0ea0c9ee3352eb03826df611\""
Mar 14 00:15:05.624429 kubelet[2237]: E0314 00:15:05.624262 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:05.627373 kubelet[2237]: E0314 00:15:05.626995 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:05.666033 containerd[1479]: time="2026-03-14T00:15:05.665925485Z" level=info msg="CreateContainer within sandbox \"cd789a166ed36c3cbec07fa6325831a72fd503cd0ea0c9ee3352eb03826df611\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 14 00:15:05.679066 containerd[1479]: time="2026-03-14T00:15:05.677887342Z" level=info msg="CreateContainer within sandbox \"b532ba7f1f1febca032652e3abfc5d2d216aa73b7f298b38ce9e7f8d9aa455c7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 14 00:15:05.696727 containerd[1479]: time="2026-03-14T00:15:05.696455569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0179e3ab097e0c9b0495146047d500e3831f4e2941ccca344449d0a913d7ce9e\""
Mar 14 00:15:05.700260 kubelet[2237]: E0314 00:15:05.700114 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:06.085393 containerd[1479]: time="2026-03-14T00:15:06.084295824Z" level=info msg="CreateContainer within sandbox \"0179e3ab097e0c9b0495146047d500e3831f4e2941ccca344449d0a913d7ce9e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 14 00:15:06.096417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3009115206.mount: Deactivated successfully.
Mar 14 00:15:06.151351 containerd[1479]: time="2026-03-14T00:15:06.150118385Z" level=info msg="CreateContainer within sandbox \"b532ba7f1f1febca032652e3abfc5d2d216aa73b7f298b38ce9e7f8d9aa455c7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c86e7f0829a8ccfa0ba05c83338b0d457b247dbcfb8706bd0bafcbd70a7eaf53\""
Mar 14 00:15:06.162295 containerd[1479]: time="2026-03-14T00:15:06.159704322Z" level=info msg="StartContainer for \"c86e7f0829a8ccfa0ba05c83338b0d457b247dbcfb8706bd0bafcbd70a7eaf53\""
Mar 14 00:15:06.179963 containerd[1479]: time="2026-03-14T00:15:06.179782495Z" level=info msg="CreateContainer within sandbox \"cd789a166ed36c3cbec07fa6325831a72fd503cd0ea0c9ee3352eb03826df611\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a77b121c70d8e5b20416f42d833d565f53c2bedfd36c7f9971bc0b5283096ae0\""
Mar 14 00:15:06.181737 containerd[1479]: time="2026-03-14T00:15:06.181708083Z" level=info msg="StartContainer for \"a77b121c70d8e5b20416f42d833d565f53c2bedfd36c7f9971bc0b5283096ae0\""
Mar 14 00:15:06.185430 kubelet[2237]: I0314 00:15:06.185119 2237 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:15:06.185956 kubelet[2237]: E0314 00:15:06.185921 2237 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost"
Mar 14 00:15:06.193213 containerd[1479]: time="2026-03-14T00:15:06.190282999Z" level=info msg="CreateContainer within sandbox \"0179e3ab097e0c9b0495146047d500e3831f4e2941ccca344449d0a913d7ce9e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5cb1f3781db42ef1b2df247e9807bad8bf591453aa981a06283bd5dbae7484d4\""
Mar 14 00:15:06.193213 containerd[1479]: time="2026-03-14T00:15:06.191668883Z" level=info msg="StartContainer for \"5cb1f3781db42ef1b2df247e9807bad8bf591453aa981a06283bd5dbae7484d4\""
Mar 14 00:15:06.522924 systemd[1]: Started cri-containerd-c86e7f0829a8ccfa0ba05c83338b0d457b247dbcfb8706bd0bafcbd70a7eaf53.scope - libcontainer container c86e7f0829a8ccfa0ba05c83338b0d457b247dbcfb8706bd0bafcbd70a7eaf53.
Mar 14 00:15:06.558828 systemd[1]: Started cri-containerd-5cb1f3781db42ef1b2df247e9807bad8bf591453aa981a06283bd5dbae7484d4.scope - libcontainer container 5cb1f3781db42ef1b2df247e9807bad8bf591453aa981a06283bd5dbae7484d4.
Mar 14 00:15:06.686865 systemd[1]: Started cri-containerd-a77b121c70d8e5b20416f42d833d565f53c2bedfd36c7f9971bc0b5283096ae0.scope - libcontainer container a77b121c70d8e5b20416f42d833d565f53c2bedfd36c7f9971bc0b5283096ae0.
Mar 14 00:15:07.168280 containerd[1479]: time="2026-03-14T00:15:07.168220440Z" level=info msg="StartContainer for \"c86e7f0829a8ccfa0ba05c83338b0d457b247dbcfb8706bd0bafcbd70a7eaf53\" returns successfully"
Mar 14 00:15:07.194181 containerd[1479]: time="2026-03-14T00:15:07.194035407Z" level=info msg="StartContainer for \"a77b121c70d8e5b20416f42d833d565f53c2bedfd36c7f9971bc0b5283096ae0\" returns successfully"
Mar 14 00:15:07.225924 containerd[1479]: time="2026-03-14T00:15:07.225583854Z" level=info msg="StartContainer for \"5cb1f3781db42ef1b2df247e9807bad8bf591453aa981a06283bd5dbae7484d4\" returns successfully"
Mar 14 00:15:08.279757 kubelet[2237]: E0314 00:15:08.279259 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:15:08.279757 kubelet[2237]: E0314 00:15:08.281082 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:08.284490 kubelet[2237]: E0314 00:15:08.284106 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:15:08.284554 kubelet[2237]: E0314 00:15:08.284535 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:08.324013 kubelet[2237]: E0314 00:15:08.323920 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:15:08.325243 kubelet[2237]: E0314 00:15:08.325070 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:09.503303 kubelet[2237]: E0314 00:15:09.502451 2237 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 14 00:15:09.619598 kubelet[2237]: E0314 00:15:09.617301 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:15:09.619598 kubelet[2237]: E0314 00:15:09.617470 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:15:09.619598 kubelet[2237]: E0314 00:15:09.617965 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:09.619598 kubelet[2237]: E0314 00:15:09.617999 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:09.818331 kubelet[2237]: E0314 00:15:09.718900 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:15:09.818331 kubelet[2237]: E0314 00:15:09.760243 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:10.705074 kubelet[2237]: E0314 00:15:10.704556 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:15:10.705074 kubelet[2237]: E0314 00:15:10.704955 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:10.710727 kubelet[2237]: E0314 00:15:10.709805 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:15:10.710727 kubelet[2237]: E0314 00:15:10.710449 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:11.661705 kubelet[2237]: E0314 00:15:11.659518 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:15:11.661705 kubelet[2237]: E0314 00:15:11.659913 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:12.682579 kubelet[2237]: I0314 00:15:12.678714 2237 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:15:14.097667 kubelet[2237]: E0314 00:15:14.080875 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:15:14.105985 kubelet[2237]: E0314 00:15:14.100895 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:14.466553 kubelet[2237]: E0314 00:15:14.463768 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:15:14.469436 kubelet[2237]: E0314 00:15:14.464112 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:17.030521 kubelet[2237]: E0314 00:15:17.029922
2237 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 14 00:15:17.116691 kubelet[2237]: I0314 00:15:17.114710 2237 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 14 00:15:17.116691 kubelet[2237]: E0314 00:15:17.114750 2237 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 14 00:15:17.148253 kubelet[2237]: I0314 00:15:17.147883 2237 apiserver.go:52] "Watching apiserver"
Mar 14 00:15:17.198547 kubelet[2237]: I0314 00:15:17.198431 2237 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 14 00:15:17.205648 kubelet[2237]: I0314 00:15:17.205584 2237 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 14 00:15:17.233303 kubelet[2237]: E0314 00:15:17.233023 2237 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 14 00:15:17.233303 kubelet[2237]: I0314 00:15:17.233313 2237 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:15:17.238684 kubelet[2237]: E0314 00:15:17.238652 2237 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:15:17.239022 kubelet[2237]: I0314 00:15:17.238835 2237 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 14 00:15:17.244945 kubelet[2237]: E0314 00:15:17.244627 2237 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 14 00:15:21.686768 kubelet[2237]: I0314 00:15:21.686113 2237 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:15:21.783975 kubelet[2237]: E0314 00:15:21.783890 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:25.815673 kubelet[2237]: E0314 00:15:25.815029 2237 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.438s"
Mar 14 00:15:25.829687 kubelet[2237]: E0314 00:15:25.828496 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:25.908079 kubelet[2237]: I0314 00:15:25.906658 2237 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 14 00:15:25.927738 kubelet[2237]: E0314 00:15:25.927692 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:26.087880 kubelet[2237]: I0314 00:15:26.083397 2237 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.083293398 podStartE2EDuration="5.083293398s" podCreationTimestamp="2026-03-14 00:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:26.082547483 +0000 UTC m=+30.087251177" watchObservedRunningTime="2026-03-14 00:15:26.083293398 +0000 UTC m=+30.087997082"
Mar 14 00:15:26.988210 kubelet[2237]: E0314 00:15:26.987519 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:29.868354 systemd[1]: Reloading requested from client PID 2536 ('systemctl') (unit session-5.scope)...
Mar 14 00:15:29.868405 systemd[1]: Reloading...
Mar 14 00:15:30.209997 zram_generator::config[2578]: No configuration found.
Mar 14 00:15:30.912362 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:15:31.241112 systemd[1]: Reloading finished in 1372 ms.
Mar 14 00:15:31.328065 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:15:31.368054 systemd[1]: kubelet.service: Deactivated successfully.
Mar 14 00:15:31.368594 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:15:31.368656 systemd[1]: kubelet.service: Consumed 17.144s CPU time, 132.2M memory peak, 0B memory swap peak.
Mar 14 00:15:31.381851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:15:31.987384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:15:31.997568 (kubelet)[2619]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:15:32.178846 kubelet[2619]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:15:32.274094 kubelet[2619]: I0314 00:15:32.273591 2619 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 14 00:15:32.274094 kubelet[2619]: I0314 00:15:32.274102 2619 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:15:32.275022 kubelet[2619]: I0314 00:15:32.274683 2619 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 14 00:15:32.275022 kubelet[2619]: I0314 00:15:32.274705 2619 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:15:32.280213 kubelet[2619]: I0314 00:15:32.278267 2619 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 14 00:15:32.280663 kubelet[2619]: I0314 00:15:32.280599 2619 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 14 00:15:32.284867 kubelet[2619]: I0314 00:15:32.284696 2619 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:15:32.299872 kubelet[2619]: E0314 00:15:32.299812 2619 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:15:32.300211 kubelet[2619]: I0314 00:15:32.300102 2619 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:15:32.317518 kubelet[2619]: I0314 00:15:32.317389 2619 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 14 00:15:32.318453 kubelet[2619]: I0314 00:15:32.318398 2619 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:15:32.318956 kubelet[2619]: I0314 00:15:32.318615 2619 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:15:32.319398 kubelet[2619]: I0314 00:15:32.319377 2619 topology_manager.go:143] "Creating topology manager with none policy"
Mar 14 00:15:32.319528 kubelet[2619]: I0314 00:15:32.319509 2619 container_manager_linux.go:308] "Creating device plugin manager"
Mar 14 00:15:32.319656 kubelet[2619]: I0314 00:15:32.319637 2619 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 14 00:15:32.320572 kubelet[2619]: I0314 00:15:32.320548 2619 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 14 00:15:32.321952 kubelet[2619]: I0314 00:15:32.321930 2619 kubelet.go:482] "Attempting to sync node with API server"
Mar 14 00:15:32.322448 kubelet[2619]: I0314 00:15:32.322346 2619 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:15:32.324561 kubelet[2619]: I0314 00:15:32.324496 2619 kubelet.go:394] "Adding apiserver pod source"
Mar 14 00:15:32.324800 kubelet[2619]: I0314 00:15:32.324653 2619 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:15:32.328562 kubelet[2619]: I0314 00:15:32.328249 2619 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:15:32.330868 kubelet[2619]: I0314 00:15:32.330791 2619 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:15:32.345638 kubelet[2619]: I0314 00:15:32.345601 2619 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 14 00:15:32.364804 kubelet[2619]: I0314 00:15:32.364724 2619 server.go:1257] "Started kubelet"
Mar 14 00:15:32.369564 kubelet[2619]: I0314 00:15:32.369363 2619 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:15:32.369710 kubelet[2619]: I0314 00:15:32.369600 2619 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 14 00:15:32.371269 kubelet[2619]: I0314 00:15:32.371245 2619
fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 14 00:15:32.375366 kubelet[2619]: I0314 00:15:32.375285 2619 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:15:32.375471 kubelet[2619]: I0314 00:15:32.375420 2619 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:15:32.377633 kubelet[2619]: I0314 00:15:32.377557 2619 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:15:32.382438 kubelet[2619]: I0314 00:15:32.382344 2619 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:15:32.386371 kubelet[2619]: I0314 00:15:32.386283 2619 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 14 00:15:32.386811 kubelet[2619]: E0314 00:15:32.386708 2619 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 14 00:15:32.387875 kubelet[2619]: I0314 00:15:32.387041 2619 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 14 00:15:32.387875 kubelet[2619]: I0314 00:15:32.387467 2619 reconciler.go:29] "Reconciler: start to sync state"
Mar 14 00:15:32.395433 kubelet[2619]: I0314 00:15:32.395305 2619 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:15:32.397179 kubelet[2619]: I0314 00:15:32.395461 2619 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:15:32.402192 kubelet[2619]: I0314 00:15:32.401997 2619 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:15:32.412873 kubelet[2619]: E0314 00:15:32.412701 2619 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 14 00:15:32.461702 kubelet[2619]: I0314 00:15:32.461409 2619 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:15:32.471195 kubelet[2619]: I0314 00:15:32.466410 2619 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:15:32.471195 kubelet[2619]: I0314 00:15:32.466913 2619 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 14 00:15:32.471195 kubelet[2619]: I0314 00:15:32.466958 2619 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 14 00:15:32.471195 kubelet[2619]: E0314 00:15:32.467056 2619 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:15:32.555027 kubelet[2619]: I0314 00:15:32.554817 2619 cpu_manager.go:225] "Starting" policy="none"
Mar 14 00:15:32.555027 kubelet[2619]: I0314 00:15:32.554839 2619 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 14 00:15:32.555027 kubelet[2619]: I0314 00:15:32.554870 2619 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 14 00:15:32.555296 kubelet[2619]: I0314 00:15:32.555213 2619 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Mar 14 00:15:32.555296 kubelet[2619]: I0314 00:15:32.555236 2619 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Mar 14 00:15:32.555296 kubelet[2619]: I0314 00:15:32.555266 2619 policy_none.go:50] "Start"
Mar 14 00:15:32.555296 kubelet[2619]: I0314 00:15:32.555279 2619 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 14 00:15:32.555400 kubelet[2619]: I0314 00:15:32.555298 2619 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 14 00:15:32.557220 kubelet[2619]: I0314 00:15:32.555464 2619 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 14 00:15:32.557220 kubelet[2619]: I0314 00:15:32.555548 2619 policy_none.go:44] "Start"
Mar 14 00:15:32.568468 kubelet[2619]: E0314 00:15:32.568352 2619 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 14 00:15:32.569965 kubelet[2619]: E0314 00:15:32.569863 2619 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:15:32.573015 kubelet[2619]: I0314 00:15:32.570227 2619 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 14 00:15:32.573120 kubelet[2619]: I0314 00:15:32.572997 2619 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:15:32.573120 kubelet[2619]: I0314 00:15:32.573636 2619 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 14 00:15:32.576690 kubelet[2619]: E0314 00:15:32.575514 2619 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring."
err="no imagefs label for configured runtime"
Mar 14 00:15:32.739411 kubelet[2619]: I0314 00:15:32.734598 2619 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:15:32.776663 kubelet[2619]: I0314 00:15:32.775724 2619 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 14 00:15:32.779275 kubelet[2619]: I0314 00:15:32.778276 2619 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 14 00:15:32.779275 kubelet[2619]: I0314 00:15:32.778276 2619 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:15:32.801114 kubelet[2619]: I0314 00:15:32.801059 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ded2046bd35fb6afd7a11176668771c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ded2046bd35fb6afd7a11176668771c\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:15:32.802572 kubelet[2619]: I0314 00:15:32.802027 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ded2046bd35fb6afd7a11176668771c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5ded2046bd35fb6afd7a11176668771c\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:15:32.802572 kubelet[2619]: I0314 00:15:32.802102 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:15:32.802572 kubelet[2619]: I0314 00:15:32.802235 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:15:32.802572 kubelet[2619]: I0314 00:15:32.802263 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:15:32.802572 kubelet[2619]: I0314 00:15:32.802299 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:15:32.802926 kubelet[2619]: I0314 00:15:32.802324 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:15:32.802926 kubelet[2619]: I0314 00:15:32.802357 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost"
Mar 14 00:15:32.802926 kubelet[2619]: I0314 00:15:32.802382 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ded2046bd35fb6afd7a11176668771c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ded2046bd35fb6afd7a11176668771c\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:15:32.819891 kubelet[2619]: I0314 00:15:32.809820 2619 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Mar 14 00:15:32.819891 kubelet[2619]: I0314 00:15:32.810038 2619 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 14 00:15:32.905608 kubelet[2619]: E0314 00:15:32.904254 2619 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 14 00:15:32.911002 kubelet[2619]: E0314 00:15:32.910904 2619 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:15:32.911578 kubelet[2619]: E0314 00:15:32.911473 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:33.213248 kubelet[2619]: E0314 00:15:33.208865 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:33.213248 kubelet[2619]: E0314 00:15:33.208946 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:33.327712 kubelet[2619]: I0314 00:15:33.326987 2619 apiserver.go:52] "Watching apiserver"
Mar 14 00:15:33.388498 kubelet[2619]: I0314 00:15:33.388312 2619 desired_state_of_world_populator.go:154] "Finished populating initial desired state
of world" Mar 14 00:15:33.511985 kubelet[2619]: E0314 00:15:33.511467 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:15:33.511985 kubelet[2619]: E0314 00:15:33.511699 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:15:33.514242 kubelet[2619]: E0314 00:15:33.513458 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:15:34.714384 kubelet[2619]: E0314 00:15:34.713238 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:15:35.709587 kubelet[2619]: E0314 00:15:35.707913 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:15:36.356279 kubelet[2619]: I0314 00:15:36.355337 2619 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.355311229 podStartE2EDuration="4.355311229s" podCreationTimestamp="2026-03-14 00:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:35.474021606 +0000 UTC m=+3.464631682" watchObservedRunningTime="2026-03-14 00:15:36.355311229 +0000 UTC m=+4.345921274" Mar 14 00:15:36.366475 kubelet[2619]: I0314 00:15:36.363449 2619 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 14 00:15:36.366475 kubelet[2619]: I0314 00:15:36.365920 2619 
kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 14 00:15:36.367467 containerd[1479]: time="2026-03-14T00:15:36.364248245Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 14 00:15:36.677298 sudo[1603]: pam_unix(sudo:session): session closed for user root Mar 14 00:15:36.686560 sshd[1599]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:36.696239 systemd[1]: sshd@4-10.0.0.32:22-10.0.0.1:57466.service: Deactivated successfully. Mar 14 00:15:36.701605 systemd[1]: session-5.scope: Deactivated successfully. Mar 14 00:15:36.702633 systemd[1]: session-5.scope: Consumed 26.233s CPU time, 162.4M memory peak, 0B memory swap peak. Mar 14 00:15:36.705278 systemd-logind[1465]: Session 5 logged out. Waiting for processes to exit. Mar 14 00:15:36.710350 systemd-logind[1465]: Removed session 5. Mar 14 00:15:38.662861 systemd[1]: Created slice kubepods-besteffort-pod402dae18_501f_48e2_861f_e4bc45305f2d.slice - libcontainer container kubepods-besteffort-pod402dae18_501f_48e2_861f_e4bc45305f2d.slice. 
Mar 14 00:15:38.674672 kubelet[2619]: I0314 00:15:38.673795 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/402dae18-501f-48e2-861f-e4bc45305f2d-kube-proxy\") pod \"kube-proxy-wjl6w\" (UID: \"402dae18-501f-48e2-861f-e4bc45305f2d\") " pod="kube-system/kube-proxy-wjl6w"
Mar 14 00:15:38.674672 kubelet[2619]: I0314 00:15:38.673909 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/402dae18-501f-48e2-861f-e4bc45305f2d-xtables-lock\") pod \"kube-proxy-wjl6w\" (UID: \"402dae18-501f-48e2-861f-e4bc45305f2d\") " pod="kube-system/kube-proxy-wjl6w"
Mar 14 00:15:38.691989 systemd[1]: Created slice kubepods-burstable-pod9b031527_15ad_4546_a343_5111ee03c36c.slice - libcontainer container kubepods-burstable-pod9b031527_15ad_4546_a343_5111ee03c36c.slice.
Mar 14 00:15:38.716734 kubelet[2619]: E0314 00:15:38.716037 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:38.775241 kubelet[2619]: I0314 00:15:38.774794 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b031527-15ad-4546-a343-5111ee03c36c-xtables-lock\") pod \"kube-flannel-ds-zn27z\" (UID: \"9b031527-15ad-4546-a343-5111ee03c36c\") " pod="kube-flannel/kube-flannel-ds-zn27z"
Mar 14 00:15:38.775241 kubelet[2619]: I0314 00:15:38.774885 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k6p4\" (UniqueName: \"kubernetes.io/projected/9b031527-15ad-4546-a343-5111ee03c36c-kube-api-access-4k6p4\") pod \"kube-flannel-ds-zn27z\" (UID: \"9b031527-15ad-4546-a343-5111ee03c36c\") " pod="kube-flannel/kube-flannel-ds-zn27z"
Mar 14 00:15:38.775241 kubelet[2619]: I0314 00:15:38.774963 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/402dae18-501f-48e2-861f-e4bc45305f2d-lib-modules\") pod \"kube-proxy-wjl6w\" (UID: \"402dae18-501f-48e2-861f-e4bc45305f2d\") " pod="kube-system/kube-proxy-wjl6w"
Mar 14 00:15:38.775241 kubelet[2619]: I0314 00:15:38.774988 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9b031527-15ad-4546-a343-5111ee03c36c-run\") pod \"kube-flannel-ds-zn27z\" (UID: \"9b031527-15ad-4546-a343-5111ee03c36c\") " pod="kube-flannel/kube-flannel-ds-zn27z"
Mar 14 00:15:38.775241 kubelet[2619]: I0314 00:15:38.775010 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/9b031527-15ad-4546-a343-5111ee03c36c-cni\") pod \"kube-flannel-ds-zn27z\" (UID: \"9b031527-15ad-4546-a343-5111ee03c36c\") " pod="kube-flannel/kube-flannel-ds-zn27z"
Mar 14 00:15:38.775706 kubelet[2619]: I0314 00:15:38.775047 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvdqb\" (UniqueName: \"kubernetes.io/projected/402dae18-501f-48e2-861f-e4bc45305f2d-kube-api-access-xvdqb\") pod \"kube-proxy-wjl6w\" (UID: \"402dae18-501f-48e2-861f-e4bc45305f2d\") " pod="kube-system/kube-proxy-wjl6w"
Mar 14 00:15:38.775706 kubelet[2619]: I0314 00:15:38.775071 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/9b031527-15ad-4546-a343-5111ee03c36c-cni-plugin\") pod \"kube-flannel-ds-zn27z\" (UID: \"9b031527-15ad-4546-a343-5111ee03c36c\") " pod="kube-flannel/kube-flannel-ds-zn27z"
Mar 14 00:15:38.775706 kubelet[2619]: I0314 00:15:38.775095 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/9b031527-15ad-4546-a343-5111ee03c36c-flannel-cfg\") pod \"kube-flannel-ds-zn27z\" (UID: \"9b031527-15ad-4546-a343-5111ee03c36c\") " pod="kube-flannel/kube-flannel-ds-zn27z"
Mar 14 00:15:38.994117 kubelet[2619]: E0314 00:15:38.993817 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:38.998980 containerd[1479]: time="2026-03-14T00:15:38.996993466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wjl6w,Uid:402dae18-501f-48e2-861f-e4bc45305f2d,Namespace:kube-system,Attempt:0,}"
Mar 14 00:15:39.011551 kubelet[2619]: E0314 00:15:39.010890 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:39.012421 containerd[1479]: time="2026-03-14T00:15:39.012294147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-zn27z,Uid:9b031527-15ad-4546-a343-5111ee03c36c,Namespace:kube-flannel,Attempt:0,}"
Mar 14 00:15:39.084276 containerd[1479]: time="2026-03-14T00:15:39.083669770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:15:39.084276 containerd[1479]: time="2026-03-14T00:15:39.083715555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:15:39.084276 containerd[1479]: time="2026-03-14T00:15:39.083731004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:39.084276 containerd[1479]: time="2026-03-14T00:15:39.084003595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:39.084808 containerd[1479]: time="2026-03-14T00:15:39.083375653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:15:39.084808 containerd[1479]: time="2026-03-14T00:15:39.083720969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:15:39.084808 containerd[1479]: time="2026-03-14T00:15:39.083743420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:39.085575 containerd[1479]: time="2026-03-14T00:15:39.085447319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:39.132022 systemd[1]: Started cri-containerd-075093f83b2ebcd39e0eb9411be591e7f3919d19b883bd3247750789f55d21a9.scope - libcontainer container 075093f83b2ebcd39e0eb9411be591e7f3919d19b883bd3247750789f55d21a9.
Mar 14 00:15:39.137031 systemd[1]: Started cri-containerd-487ae50a029412830b619c61bb7065ed3e33bdc71a6537f8a0f4968b24d2ae34.scope - libcontainer container 487ae50a029412830b619c61bb7065ed3e33bdc71a6537f8a0f4968b24d2ae34.
Mar 14 00:15:39.209263 containerd[1479]: time="2026-03-14T00:15:39.207837534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wjl6w,Uid:402dae18-501f-48e2-861f-e4bc45305f2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"075093f83b2ebcd39e0eb9411be591e7f3919d19b883bd3247750789f55d21a9\""
Mar 14 00:15:39.210572 containerd[1479]: time="2026-03-14T00:15:39.210353230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-zn27z,Uid:9b031527-15ad-4546-a343-5111ee03c36c,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"487ae50a029412830b619c61bb7065ed3e33bdc71a6537f8a0f4968b24d2ae34\""
Mar 14 00:15:39.213991 kubelet[2619]: E0314 00:15:39.212519 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:39.213991 kubelet[2619]: E0314 00:15:39.213351 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:39.217729 containerd[1479]: time="2026-03-14T00:15:39.217659462Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\""
Mar 14 00:15:39.224423 containerd[1479]: time="2026-03-14T00:15:39.224197409Z" level=info msg="CreateContainer within sandbox \"075093f83b2ebcd39e0eb9411be591e7f3919d19b883bd3247750789f55d21a9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 14 00:15:39.282240 containerd[1479]: time="2026-03-14T00:15:39.281993946Z" level=info msg="CreateContainer within sandbox \"075093f83b2ebcd39e0eb9411be591e7f3919d19b883bd3247750789f55d21a9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"65b455e410db36d1021075c58380d58463257b396a4b2a03698dd1dcc5df32bf\""
Mar 14 00:15:39.284721 containerd[1479]: time="2026-03-14T00:15:39.284585701Z" level=info msg="StartContainer for \"65b455e410db36d1021075c58380d58463257b396a4b2a03698dd1dcc5df32bf\""
Mar 14 00:15:39.365042 systemd[1]: Started cri-containerd-65b455e410db36d1021075c58380d58463257b396a4b2a03698dd1dcc5df32bf.scope - libcontainer container 65b455e410db36d1021075c58380d58463257b396a4b2a03698dd1dcc5df32bf.
Mar 14 00:15:39.429447 containerd[1479]: time="2026-03-14T00:15:39.429241320Z" level=info msg="StartContainer for \"65b455e410db36d1021075c58380d58463257b396a4b2a03698dd1dcc5df32bf\" returns successfully"
Mar 14 00:15:39.745659 kubelet[2619]: E0314 00:15:39.744441 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:39.775922 kubelet[2619]: I0314 00:15:39.775093 2619 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-wjl6w" podStartSLOduration=2.775075288 podStartE2EDuration="2.775075288s" podCreationTimestamp="2026-03-14 00:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:39.775062465 +0000 UTC m=+7.765672532" watchObservedRunningTime="2026-03-14 00:15:39.775075288 +0000 UTC m=+7.765685334"
Mar 14 00:15:40.048668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3222067820.mount: Deactivated successfully.
Mar 14 00:15:40.661876 containerd[1479]: time="2026-03-14T00:15:40.396092273Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:41.474319 containerd[1479]: time="2026-03-14T00:15:40.722984904Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008"
Mar 14 00:15:42.612576 containerd[1479]: time="2026-03-14T00:15:42.610258717Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:42.935717 containerd[1479]: time="2026-03-14T00:15:42.932691425Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 3.714990616s"
Mar 14 00:15:42.935717 containerd[1479]: time="2026-03-14T00:15:42.932742759Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\""
Mar 14 00:15:42.935717 containerd[1479]: time="2026-03-14T00:15:42.935617293Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:43.004666 containerd[1479]: time="2026-03-14T00:15:43.002717092Z" level=info msg="CreateContainer within sandbox \"487ae50a029412830b619c61bb7065ed3e33bdc71a6537f8a0f4968b24d2ae34\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Mar 14 00:15:43.062642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2855252165.mount: Deactivated successfully.
Mar 14 00:15:43.071817 containerd[1479]: time="2026-03-14T00:15:43.071369265Z" level=info msg="CreateContainer within sandbox \"487ae50a029412830b619c61bb7065ed3e33bdc71a6537f8a0f4968b24d2ae34\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"36daafd81120e3d3ea6e9cbf445fe4a4797f5066e67c4c5e6ad51069ce0c49f1\""
Mar 14 00:15:43.073393 containerd[1479]: time="2026-03-14T00:15:43.072584495Z" level=info msg="StartContainer for \"36daafd81120e3d3ea6e9cbf445fe4a4797f5066e67c4c5e6ad51069ce0c49f1\""
Mar 14 00:15:43.220683 systemd[1]: run-containerd-runc-k8s.io-36daafd81120e3d3ea6e9cbf445fe4a4797f5066e67c4c5e6ad51069ce0c49f1-runc.d4bT2g.mount: Deactivated successfully.
Mar 14 00:15:43.243840 systemd[1]: Started cri-containerd-36daafd81120e3d3ea6e9cbf445fe4a4797f5066e67c4c5e6ad51069ce0c49f1.scope - libcontainer container 36daafd81120e3d3ea6e9cbf445fe4a4797f5066e67c4c5e6ad51069ce0c49f1.
Mar 14 00:15:45.430756 containerd[1479]: time="2026-03-14T00:15:45.427387122Z" level=error msg="get state for 36daafd81120e3d3ea6e9cbf445fe4a4797f5066e67c4c5e6ad51069ce0c49f1" error="context deadline exceeded: unknown"
Mar 14 00:15:45.430756 containerd[1479]: time="2026-03-14T00:15:45.427945420Z" level=warning msg="unknown status" status=0
Mar 14 00:15:46.195821 containerd[1479]: time="2026-03-14T00:15:46.195307050Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Mar 14 00:15:46.231614 containerd[1479]: time="2026-03-14T00:15:46.231542106Z" level=info msg="StartContainer for \"36daafd81120e3d3ea6e9cbf445fe4a4797f5066e67c4c5e6ad51069ce0c49f1\" returns successfully"
Mar 14 00:15:46.257824 systemd[1]: cri-containerd-36daafd81120e3d3ea6e9cbf445fe4a4797f5066e67c4c5e6ad51069ce0c49f1.scope: Deactivated successfully.
Mar 14 00:15:46.360531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36daafd81120e3d3ea6e9cbf445fe4a4797f5066e67c4c5e6ad51069ce0c49f1-rootfs.mount: Deactivated successfully.
Mar 14 00:15:46.968811 containerd[1479]: time="2026-03-14T00:15:46.968581515Z" level=info msg="shim disconnected" id=36daafd81120e3d3ea6e9cbf445fe4a4797f5066e67c4c5e6ad51069ce0c49f1 namespace=k8s.io
Mar 14 00:15:46.968811 containerd[1479]: time="2026-03-14T00:15:46.968632800Z" level=warning msg="cleaning up after shim disconnected" id=36daafd81120e3d3ea6e9cbf445fe4a4797f5066e67c4c5e6ad51069ce0c49f1 namespace=k8s.io
Mar 14 00:15:46.968811 containerd[1479]: time="2026-03-14T00:15:46.968650312Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:15:46.985028 kubelet[2619]: E0314 00:15:46.984958 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:47.995521 kubelet[2619]: E0314 00:15:47.994888 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:48.004974 containerd[1479]: time="2026-03-14T00:15:48.004776094Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\""
Mar 14 00:15:55.810496 containerd[1479]: time="2026-03-14T00:15:55.809775539Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:55.819718 containerd[1479]: time="2026-03-14T00:15:55.819592511Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574"
Mar 14 00:15:55.854278 containerd[1479]: time="2026-03-14T00:15:55.850944821Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:55.929956 containerd[1479]: time="2026-03-14T00:15:55.926807152Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:55.948528 containerd[1479]: time="2026-03-14T00:15:55.931838764Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 7.926762416s"
Mar 14 00:15:55.948528 containerd[1479]: time="2026-03-14T00:15:55.931899016Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\""
Mar 14 00:15:56.018770 containerd[1479]: time="2026-03-14T00:15:56.017085951Z" level=info msg="CreateContainer within sandbox \"487ae50a029412830b619c61bb7065ed3e33bdc71a6537f8a0f4968b24d2ae34\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 14 00:15:56.118220 containerd[1479]: time="2026-03-14T00:15:56.117536572Z" level=info msg="CreateContainer within sandbox \"487ae50a029412830b619c61bb7065ed3e33bdc71a6537f8a0f4968b24d2ae34\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cf2238af1195d12562db713797f5c2c0973ce7dc415fd5416e68d6df928cd17f\""
Mar 14 00:15:56.123749 containerd[1479]: time="2026-03-14T00:15:56.123652162Z" level=info msg="StartContainer for \"cf2238af1195d12562db713797f5c2c0973ce7dc415fd5416e68d6df928cd17f\""
Mar 14 00:15:56.994522 systemd[1]: Started cri-containerd-cf2238af1195d12562db713797f5c2c0973ce7dc415fd5416e68d6df928cd17f.scope - libcontainer container cf2238af1195d12562db713797f5c2c0973ce7dc415fd5416e68d6df928cd17f.
Mar 14 00:15:57.729318 systemd[1]: cri-containerd-cf2238af1195d12562db713797f5c2c0973ce7dc415fd5416e68d6df928cd17f.scope: Deactivated successfully.
Mar 14 00:15:57.730467 containerd[1479]: time="2026-03-14T00:15:57.730077946Z" level=info msg="StartContainer for \"cf2238af1195d12562db713797f5c2c0973ce7dc415fd5416e68d6df928cd17f\" returns successfully"
Mar 14 00:15:57.781061 kubelet[2619]: I0314 00:15:57.775080 2619 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Mar 14 00:15:57.926990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf2238af1195d12562db713797f5c2c0973ce7dc415fd5416e68d6df928cd17f-rootfs.mount: Deactivated successfully.
Mar 14 00:15:58.281621 containerd[1479]: time="2026-03-14T00:15:58.280883410Z" level=info msg="shim disconnected" id=cf2238af1195d12562db713797f5c2c0973ce7dc415fd5416e68d6df928cd17f namespace=k8s.io
Mar 14 00:15:58.282997 containerd[1479]: time="2026-03-14T00:15:58.282837328Z" level=warning msg="cleaning up after shim disconnected" id=cf2238af1195d12562db713797f5c2c0973ce7dc415fd5416e68d6df928cd17f namespace=k8s.io
Mar 14 00:15:58.282997 containerd[1479]: time="2026-03-14T00:15:58.282903409Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:15:58.311677 systemd[1]: Created slice kubepods-burstable-pode1417159_85db_4564_8126_a065f196eb25.slice - libcontainer container kubepods-burstable-pode1417159_85db_4564_8126_a065f196eb25.slice.
Mar 14 00:15:58.370085 kubelet[2619]: I0314 00:15:58.363073 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22ca66a2-71af-42c2-b0ef-66c41d29d1a5-config-volume\") pod \"coredns-7d764666f9-lgwpv\" (UID: \"22ca66a2-71af-42c2-b0ef-66c41d29d1a5\") " pod="kube-system/coredns-7d764666f9-lgwpv"
Mar 14 00:15:58.370085 kubelet[2619]: I0314 00:15:58.368467 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1417159-85db-4564-8126-a065f196eb25-config-volume\") pod \"coredns-7d764666f9-l6nfw\" (UID: \"e1417159-85db-4564-8126-a065f196eb25\") " pod="kube-system/coredns-7d764666f9-l6nfw"
Mar 14 00:15:58.370085 kubelet[2619]: I0314 00:15:58.368541 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb84c\" (UniqueName: \"kubernetes.io/projected/e1417159-85db-4564-8126-a065f196eb25-kube-api-access-xb84c\") pod \"coredns-7d764666f9-l6nfw\" (UID: \"e1417159-85db-4564-8126-a065f196eb25\") " pod="kube-system/coredns-7d764666f9-l6nfw"
Mar 14 00:15:58.370085 kubelet[2619]: I0314 00:15:58.368694 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vghzh\" (UniqueName: \"kubernetes.io/projected/22ca66a2-71af-42c2-b0ef-66c41d29d1a5-kube-api-access-vghzh\") pod \"coredns-7d764666f9-lgwpv\" (UID: \"22ca66a2-71af-42c2-b0ef-66c41d29d1a5\") " pod="kube-system/coredns-7d764666f9-lgwpv"
Mar 14 00:15:58.384683 systemd[1]: Created slice kubepods-burstable-pod22ca66a2_71af_42c2_b0ef_66c41d29d1a5.slice - libcontainer container kubepods-burstable-pod22ca66a2_71af_42c2_b0ef_66c41d29d1a5.slice.
Mar 14 00:15:58.535624 kubelet[2619]: E0314 00:15:58.529246 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:58.565650 containerd[1479]: time="2026-03-14T00:15:58.565555386Z" level=info msg="CreateContainer within sandbox \"487ae50a029412830b619c61bb7065ed3e33bdc71a6537f8a0f4968b24d2ae34\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Mar 14 00:15:58.620654 containerd[1479]: time="2026-03-14T00:15:58.619793625Z" level=info msg="CreateContainer within sandbox \"487ae50a029412830b619c61bb7065ed3e33bdc71a6537f8a0f4968b24d2ae34\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"b4207910802b529381e66ba56ef452318093459ec35baa3804aa6e327bd5be1c\""
Mar 14 00:15:58.627956 containerd[1479]: time="2026-03-14T00:15:58.621454220Z" level=info msg="StartContainer for \"b4207910802b529381e66ba56ef452318093459ec35baa3804aa6e327bd5be1c\""
Mar 14 00:15:58.688243 kubelet[2619]: E0314 00:15:58.687794 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:58.692432 containerd[1479]: time="2026-03-14T00:15:58.691764551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-l6nfw,Uid:e1417159-85db-4564-8126-a065f196eb25,Namespace:kube-system,Attempt:0,}"
Mar 14 00:15:58.706951 kubelet[2619]: E0314 00:15:58.706911 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:58.713997 containerd[1479]: time="2026-03-14T00:15:58.712940984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-lgwpv,Uid:22ca66a2-71af-42c2-b0ef-66c41d29d1a5,Namespace:kube-system,Attempt:0,}"
Mar 14 00:15:58.820421 systemd[1]: Started cri-containerd-b4207910802b529381e66ba56ef452318093459ec35baa3804aa6e327bd5be1c.scope - libcontainer container b4207910802b529381e66ba56ef452318093459ec35baa3804aa6e327bd5be1c.
Mar 14 00:15:59.137756 systemd[1]: run-netns-cni\x2deb0f4df8\x2d0b56\x2d3754\x2d2adb\x2d7b92b512cb8b.mount: Deactivated successfully.
Mar 14 00:15:59.158118 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6851a546444b96170079913913fc1ad9f1037f96bd6adbdf4594b603dbbf690d-shm.mount: Deactivated successfully.
Mar 14 00:15:59.169996 containerd[1479]: time="2026-03-14T00:15:59.169757401Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-lgwpv,Uid:22ca66a2-71af-42c2-b0ef-66c41d29d1a5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6851a546444b96170079913913fc1ad9f1037f96bd6adbdf4594b603dbbf690d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Mar 14 00:15:59.182933 systemd[1]: run-netns-cni\x2d8466e786\x2db658\x2d17eb\x2d6b1d\x2d47655c6b123a.mount: Deactivated successfully.
Mar 14 00:15:59.212421 kubelet[2619]: E0314 00:15:59.209965 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6851a546444b96170079913913fc1ad9f1037f96bd6adbdf4594b603dbbf690d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Mar 14 00:15:59.212421 kubelet[2619]: E0314 00:15:59.210351 2619 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6851a546444b96170079913913fc1ad9f1037f96bd6adbdf4594b603dbbf690d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-lgwpv"
Mar 14 00:15:59.212421 kubelet[2619]: E0314 00:15:59.210431 2619 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6851a546444b96170079913913fc1ad9f1037f96bd6adbdf4594b603dbbf690d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-lgwpv"
Mar 14 00:15:59.212421 kubelet[2619]: E0314 00:15:59.210594 2619 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-lgwpv_kube-system(22ca66a2-71af-42c2-b0ef-66c41d29d1a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-lgwpv_kube-system(22ca66a2-71af-42c2-b0ef-66c41d29d1a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6851a546444b96170079913913fc1ad9f1037f96bd6adbdf4594b603dbbf690d\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7d764666f9-lgwpv" podUID="22ca66a2-71af-42c2-b0ef-66c41d29d1a5"
Mar 14 00:15:59.211427 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c38d09ecc244b459fad2a2b1736ee783148ab46a27d4e882947371185c86e51e-shm.mount: Deactivated successfully.
Mar 14 00:15:59.262846 containerd[1479]: time="2026-03-14T00:15:59.259364376Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-l6nfw,Uid:e1417159-85db-4564-8126-a065f196eb25,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c38d09ecc244b459fad2a2b1736ee783148ab46a27d4e882947371185c86e51e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Mar 14 00:15:59.263992 kubelet[2619]: E0314 00:15:59.263832 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c38d09ecc244b459fad2a2b1736ee783148ab46a27d4e882947371185c86e51e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Mar 14 00:15:59.265053 kubelet[2619]: E0314 00:15:59.265017 2619 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c38d09ecc244b459fad2a2b1736ee783148ab46a27d4e882947371185c86e51e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-l6nfw"
Mar 14 00:15:59.265453 kubelet[2619]: E0314 00:15:59.265419 2619 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c38d09ecc244b459fad2a2b1736ee783148ab46a27d4e882947371185c86e51e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-l6nfw"
Mar 14 00:15:59.265781 kubelet[2619]: E0314 00:15:59.265667 2619 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-l6nfw_kube-system(e1417159-85db-4564-8126-a065f196eb25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-l6nfw_kube-system(e1417159-85db-4564-8126-a065f196eb25)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c38d09ecc244b459fad2a2b1736ee783148ab46a27d4e882947371185c86e51e\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7d764666f9-l6nfw" podUID="e1417159-85db-4564-8126-a065f196eb25"
Mar 14 00:15:59.287981 containerd[1479]: time="2026-03-14T00:15:59.287831691Z" level=info msg="StartContainer for \"b4207910802b529381e66ba56ef452318093459ec35baa3804aa6e327bd5be1c\" returns successfully"
Mar 14 00:15:59.668652 kubelet[2619]: E0314 00:15:59.665992 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:15:59.740833 kubelet[2619]: I0314 00:15:59.739017 2619 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-zn27z" podStartSLOduration=3.416877375 podStartE2EDuration="22.731647062s" podCreationTimestamp="2026-03-14 00:15:37 +0000 UTC" firstStartedPulling="2026-03-14 00:15:39.216960491 +0000 UTC m=+7.207570547" lastFinishedPulling="2026-03-14 00:15:58.531730178 +0000 UTC m=+26.522340234" observedRunningTime="2026-03-14 00:15:59.731364099 +0000 UTC m=+27.721974155" watchObservedRunningTime="2026-03-14 00:15:59.731647062 +0000 UTC m=+27.722257108"
Mar 14 00:16:00.686466 kubelet[2619]: E0314 00:16:00.685645 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:00.831342 systemd-networkd[1407]: flannel.1: Link UP
Mar 14 00:16:00.831399 systemd-networkd[1407]: flannel.1: Gained carrier
Mar 14 00:16:01.954407 systemd-networkd[1407]: flannel.1: Gained IPv6LL
Mar 14 00:16:12.477793 kubelet[2619]: E0314 00:16:12.477580 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:12.478708 containerd[1479]: time="2026-03-14T00:16:12.478363327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-lgwpv,Uid:22ca66a2-71af-42c2-b0ef-66c41d29d1a5,Namespace:kube-system,Attempt:0,}"
Mar 14 00:16:12.614254 systemd-networkd[1407]: cni0: Link UP
Mar 14 00:16:12.614267 systemd-networkd[1407]: cni0: Gained carrier
Mar 14 00:16:12.628539 systemd-networkd[1407]: cni0: Lost carrier
Mar 14 00:16:12.684012 systemd-networkd[1407]: vethf72745bf: Link UP
Mar 14 00:16:12.691259 kernel: cni0: port 1(vethf72745bf) entered blocking state
Mar 14 00:16:12.691613 kernel: cni0: port 1(vethf72745bf) entered disabled state
Mar 14 00:16:12.691653 kernel: vethf72745bf: entered allmulticast mode
Mar 14 00:16:12.700244 kernel: vethf72745bf: entered promiscuous mode
Mar 14 00:16:12.709120 kernel: cni0: port 1(vethf72745bf) entered blocking state
Mar 14 00:16:12.709337 kernel: cni0: port 1(vethf72745bf) entered forwarding state
Mar 14 00:16:12.714464 kernel: cni0: port 1(vethf72745bf) entered disabled state
Mar 14 00:16:12.781715 kernel: cni0: port 1(vethf72745bf) entered blocking state
Mar 14 00:16:12.781896 kernel: cni0: port 1(vethf72745bf) entered forwarding state
Mar 14 00:16:12.782657 systemd-networkd[1407]: vethf72745bf: Gained carrier
Mar 14 00:16:12.784388 systemd-networkd[1407]: cni0: Gained carrier
Mar 14 00:16:12.803982 containerd[1479]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"}
Mar 14 00:16:12.803982 containerd[1479]: delegateAdd: netconf sent to delegate plugin:
Mar 14 00:16:12.980285 containerd[1479]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-03-14T00:16:12.979462358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:16:12.980285 containerd[1479]: time="2026-03-14T00:16:12.979630650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:16:12.980285 containerd[1479]: time="2026-03-14T00:16:12.979685271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:16:12.980285 containerd[1479]: time="2026-03-14T00:16:12.979801195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:16:13.087343 systemd[1]: run-containerd-runc-k8s.io-f1ac758c0c0df7c4105a3431c3ce06c8886cde284846053d33db906c09769399-runc.r8PXrf.mount: Deactivated successfully.
Mar 14 00:16:13.104481 systemd[1]: Started cri-containerd-f1ac758c0c0df7c4105a3431c3ce06c8886cde284846053d33db906c09769399.scope - libcontainer container f1ac758c0c0df7c4105a3431c3ce06c8886cde284846053d33db906c09769399.
Mar 14 00:16:13.162096 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 14 00:16:13.224615 containerd[1479]: time="2026-03-14T00:16:13.224509202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-lgwpv,Uid:22ca66a2-71af-42c2-b0ef-66c41d29d1a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1ac758c0c0df7c4105a3431c3ce06c8886cde284846053d33db906c09769399\""
Mar 14 00:16:13.226089 kubelet[2619]: E0314 00:16:13.225715 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:13.260999 containerd[1479]: time="2026-03-14T00:16:13.260430716Z" level=info msg="CreateContainer within sandbox \"f1ac758c0c0df7c4105a3431c3ce06c8886cde284846053d33db906c09769399\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:16:13.318834 containerd[1479]: time="2026-03-14T00:16:13.313451224Z" level=info msg="CreateContainer within sandbox \"f1ac758c0c0df7c4105a3431c3ce06c8886cde284846053d33db906c09769399\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"66b456efb71024af298e4324e0585307b110191f77e8cb97b3c2393ee28ecb57\""
Mar 14 00:16:13.318834 containerd[1479]: time="2026-03-14T00:16:13.314759562Z" level=info msg="StartContainer for \"66b456efb71024af298e4324e0585307b110191f77e8cb97b3c2393ee28ecb57\""
Mar 14 00:16:13.477006 systemd[1]: Started cri-containerd-66b456efb71024af298e4324e0585307b110191f77e8cb97b3c2393ee28ecb57.scope - libcontainer container 66b456efb71024af298e4324e0585307b110191f77e8cb97b3c2393ee28ecb57.
Mar 14 00:16:13.644293 containerd[1479]: time="2026-03-14T00:16:13.643822020Z" level=info msg="StartContainer for \"66b456efb71024af298e4324e0585307b110191f77e8cb97b3c2393ee28ecb57\" returns successfully"
Mar 14 00:16:13.869879 kubelet[2619]: E0314 00:16:13.863652 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:13.971081 kubelet[2619]: I0314 00:16:13.970656 2619 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-lgwpv" podStartSLOduration=36.970636011 podStartE2EDuration="36.970636011s" podCreationTimestamp="2026-03-14 00:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:16:13.963321176 +0000 UTC m=+41.953931252" watchObservedRunningTime="2026-03-14 00:16:13.970636011 +0000 UTC m=+41.961246057"
Mar 14 00:16:14.175722 systemd-networkd[1407]: vethf72745bf: Gained IPv6LL
Mar 14 00:16:14.496636 kubelet[2619]: E0314 00:16:14.494663 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:14.516933 containerd[1479]: time="2026-03-14T00:16:14.516423404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-l6nfw,Uid:e1417159-85db-4564-8126-a065f196eb25,Namespace:kube-system,Attempt:0,}"
Mar 14 00:16:14.560454 systemd-networkd[1407]: cni0: Gained IPv6LL
Mar 14 00:16:15.067753 kubelet[2619]: E0314 00:16:15.067558 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:15.085930 systemd-networkd[1407]: veth9e7b15ee: Link UP
Mar 14 00:16:15.103622 kernel: cni0: port 2(veth9e7b15ee) entered blocking state
Mar 14 00:16:15.103790 kernel: cni0: port 2(veth9e7b15ee) entered disabled state
Mar 14 00:16:15.118533 kernel: veth9e7b15ee: entered allmulticast mode
Mar 14 00:16:15.118671 kernel: veth9e7b15ee: entered promiscuous mode
Mar 14 00:16:15.196821 kernel: cni0: port 2(veth9e7b15ee) entered blocking state
Mar 14 00:16:15.197353 kernel: cni0: port 2(veth9e7b15ee) entered forwarding state
Mar 14 00:16:15.197579 systemd-networkd[1407]: veth9e7b15ee: Gained carrier
Mar 14 00:16:15.229944 containerd[1479]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"}
Mar 14 00:16:15.229944 containerd[1479]: delegateAdd: netconf sent to delegate plugin:
Mar 14 00:16:15.453475 containerd[1479]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-03-14T00:16:15.450216397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:16:15.453475 containerd[1479]: time="2026-03-14T00:16:15.450384618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:16:15.453475 containerd[1479]: time="2026-03-14T00:16:15.450408511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:16:15.453475 containerd[1479]: time="2026-03-14T00:16:15.450681837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:16:15.509881 systemd[1]: run-containerd-runc-k8s.io-ef53b55a99be15858179a0b350835a1fe7082b948f32093c9cca4ff0328aa560-runc.cP42TG.mount: Deactivated successfully.
Mar 14 00:16:15.533779 systemd[1]: Started cri-containerd-ef53b55a99be15858179a0b350835a1fe7082b948f32093c9cca4ff0328aa560.scope - libcontainer container ef53b55a99be15858179a0b350835a1fe7082b948f32093c9cca4ff0328aa560.
Mar 14 00:16:15.577936 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 14 00:16:15.663368 containerd[1479]: time="2026-03-14T00:16:15.663260161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-l6nfw,Uid:e1417159-85db-4564-8126-a065f196eb25,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef53b55a99be15858179a0b350835a1fe7082b948f32093c9cca4ff0328aa560\""
Mar 14 00:16:15.664969 kubelet[2619]: E0314 00:16:15.664803 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:15.686249 containerd[1479]: time="2026-03-14T00:16:15.685824517Z" level=info msg="CreateContainer within sandbox \"ef53b55a99be15858179a0b350835a1fe7082b948f32093c9cca4ff0328aa560\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:16:15.755899 containerd[1479]: time="2026-03-14T00:16:15.755620118Z" level=info msg="CreateContainer within sandbox \"ef53b55a99be15858179a0b350835a1fe7082b948f32093c9cca4ff0328aa560\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2a4d912b71b1bd6c15aff1bba1893f86fbde184c214e2882754ca0c9ed2fd0d4\""
Mar 14 00:16:15.756938 containerd[1479]: time="2026-03-14T00:16:15.756855742Z" level=info msg="StartContainer for \"2a4d912b71b1bd6c15aff1bba1893f86fbde184c214e2882754ca0c9ed2fd0d4\""
Mar 14 00:16:15.978829 systemd[1]: Started cri-containerd-2a4d912b71b1bd6c15aff1bba1893f86fbde184c214e2882754ca0c9ed2fd0d4.scope - libcontainer container 2a4d912b71b1bd6c15aff1bba1893f86fbde184c214e2882754ca0c9ed2fd0d4.
Mar 14 00:16:16.109078 kubelet[2619]: E0314 00:16:16.108853 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:16.152907 containerd[1479]: time="2026-03-14T00:16:16.152574905Z" level=info msg="StartContainer for \"2a4d912b71b1bd6c15aff1bba1893f86fbde184c214e2882754ca0c9ed2fd0d4\" returns successfully"
Mar 14 00:16:17.126358 kubelet[2619]: E0314 00:16:17.124588 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:17.189288 kubelet[2619]: I0314 00:16:17.184949 2619 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-l6nfw" podStartSLOduration=40.184930319 podStartE2EDuration="40.184930319s" podCreationTimestamp="2026-03-14 00:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:16:17.180589076 +0000 UTC m=+45.171199152" watchObservedRunningTime="2026-03-14 00:16:17.184930319 +0000 UTC m=+45.175540395"
Mar 14 00:16:17.248533 systemd-networkd[1407]: veth9e7b15ee: Gained IPv6LL
Mar 14 00:16:18.130106 kubelet[2619]: E0314 00:16:18.129373 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:19.162291 kubelet[2619]: E0314 00:16:19.160586 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:39.471546 kubelet[2619]: E0314 00:16:39.469293 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:48.374226 systemd[1]: Started sshd@5-10.0.0.32:22-10.0.0.1:45682.service - OpenSSH per-connection server daemon (10.0.0.1:45682).
Mar 14 00:16:48.478881 sshd[3710]: Accepted publickey for core from 10.0.0.1 port 45682 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:16:48.482120 sshd[3710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:48.498535 systemd-logind[1465]: New session 6 of user core.
Mar 14 00:16:48.515739 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 14 00:16:48.798000 sshd[3710]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:48.804749 systemd[1]: sshd@5-10.0.0.32:22-10.0.0.1:45682.service: Deactivated successfully.
Mar 14 00:16:48.808083 systemd[1]: session-6.scope: Deactivated successfully.
Mar 14 00:16:48.812985 systemd-logind[1465]: Session 6 logged out. Waiting for processes to exit.
Mar 14 00:16:48.817965 systemd-logind[1465]: Removed session 6.
Mar 14 00:16:51.470118 kubelet[2619]: E0314 00:16:51.469060 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:53.474723 kubelet[2619]: E0314 00:16:53.474229 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:53.849112 systemd[1]: Started sshd@6-10.0.0.32:22-10.0.0.1:42848.service - OpenSSH per-connection server daemon (10.0.0.1:42848).
Mar 14 00:16:54.021997 sshd[3745]: Accepted publickey for core from 10.0.0.1 port 42848 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:16:54.030897 sshd[3745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:54.053100 systemd-logind[1465]: New session 7 of user core.
Mar 14 00:16:54.074925 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 14 00:16:54.397482 sshd[3745]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:54.407566 systemd[1]: sshd@6-10.0.0.32:22-10.0.0.1:42848.service: Deactivated successfully.
Mar 14 00:16:54.411908 systemd[1]: session-7.scope: Deactivated successfully.
Mar 14 00:16:54.414841 systemd-logind[1465]: Session 7 logged out. Waiting for processes to exit.
Mar 14 00:16:54.418061 systemd-logind[1465]: Removed session 7.
Mar 14 00:16:58.472343 kubelet[2619]: E0314 00:16:58.471936 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:59.418532 systemd[1]: Started sshd@7-10.0.0.32:22-10.0.0.1:42862.service - OpenSSH per-connection server daemon (10.0.0.1:42862).
Mar 14 00:16:59.502907 sshd[3781]: Accepted publickey for core from 10.0.0.1 port 42862 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:16:59.506545 sshd[3781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:59.551303 systemd-logind[1465]: New session 8 of user core.
Mar 14 00:16:59.560946 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 14 00:16:59.841334 sshd[3781]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:59.857670 systemd[1]: sshd@7-10.0.0.32:22-10.0.0.1:42862.service: Deactivated successfully.
Mar 14 00:16:59.868047 systemd[1]: session-8.scope: Deactivated successfully.
Mar 14 00:16:59.871485 systemd-logind[1465]: Session 8 logged out. Waiting for processes to exit.
Mar 14 00:16:59.875384 systemd-logind[1465]: Removed session 8.
Mar 14 00:17:04.850488 systemd[1]: Started sshd@8-10.0.0.32:22-10.0.0.1:37662.service - OpenSSH per-connection server daemon (10.0.0.1:37662).
Mar 14 00:17:04.922407 sshd[3816]: Accepted publickey for core from 10.0.0.1 port 37662 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:17:04.928625 sshd[3816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:04.948458 systemd-logind[1465]: New session 9 of user core.
Mar 14 00:17:04.957656 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 14 00:17:05.161400 sshd[3816]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:05.176490 systemd[1]: sshd@8-10.0.0.32:22-10.0.0.1:37662.service: Deactivated successfully.
Mar 14 00:17:05.181710 systemd[1]: session-9.scope: Deactivated successfully.
Mar 14 00:17:05.186848 systemd-logind[1465]: Session 9 logged out. Waiting for processes to exit.
Mar 14 00:17:05.198097 systemd[1]: Started sshd@9-10.0.0.32:22-10.0.0.1:37664.service - OpenSSH per-connection server daemon (10.0.0.1:37664).
Mar 14 00:17:05.200436 systemd-logind[1465]: Removed session 9.
Mar 14 00:17:05.259237 sshd[3832]: Accepted publickey for core from 10.0.0.1 port 37664 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:17:05.263461 sshd[3832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:05.277265 systemd-logind[1465]: New session 10 of user core.
Mar 14 00:17:05.294558 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 14 00:17:05.497264 kubelet[2619]: E0314 00:17:05.490114 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:05.844710 sshd[3832]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:05.910707 systemd[1]: Started sshd@10-10.0.0.32:22-10.0.0.1:37670.service - OpenSSH per-connection server daemon (10.0.0.1:37670).
Mar 14 00:17:05.912707 systemd[1]: sshd@9-10.0.0.32:22-10.0.0.1:37664.service: Deactivated successfully.
Mar 14 00:17:05.917659 systemd[1]: session-10.scope: Deactivated successfully.
Mar 14 00:17:05.922882 systemd-logind[1465]: Session 10 logged out. Waiting for processes to exit.
Mar 14 00:17:05.930370 systemd-logind[1465]: Removed session 10.
Mar 14 00:17:06.029812 sshd[3842]: Accepted publickey for core from 10.0.0.1 port 37670 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:17:06.031987 sshd[3842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:06.077743 systemd-logind[1465]: New session 11 of user core.
Mar 14 00:17:06.089632 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 14 00:17:06.391037 sshd[3842]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:06.406995 systemd-logind[1465]: Session 11 logged out. Waiting for processes to exit.
Mar 14 00:17:06.409086 systemd[1]: sshd@10-10.0.0.32:22-10.0.0.1:37670.service: Deactivated successfully.
Mar 14 00:17:06.412839 systemd[1]: session-11.scope: Deactivated successfully.
Mar 14 00:17:06.416796 systemd-logind[1465]: Removed session 11.
Mar 14 00:17:11.456851 systemd[1]: Started sshd@11-10.0.0.32:22-10.0.0.1:51742.service - OpenSSH per-connection server daemon (10.0.0.1:51742).
Mar 14 00:17:11.655846 sshd[3883]: Accepted publickey for core from 10.0.0.1 port 51742 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:17:11.674648 sshd[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:11.700564 systemd-logind[1465]: New session 12 of user core.
Mar 14 00:17:11.724313 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 14 00:17:12.275647 sshd[3883]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:12.311273 systemd[1]: sshd@11-10.0.0.32:22-10.0.0.1:51742.service: Deactivated successfully.
Mar 14 00:17:12.322859 systemd[1]: session-12.scope: Deactivated successfully.
Mar 14 00:17:12.325398 systemd-logind[1465]: Session 12 logged out. Waiting for processes to exit.
Mar 14 00:17:12.328841 systemd-logind[1465]: Removed session 12.
Mar 14 00:17:17.308016 systemd[1]: Started sshd@12-10.0.0.32:22-10.0.0.1:51750.service - OpenSSH per-connection server daemon (10.0.0.1:51750).
Mar 14 00:17:17.364836 sshd[3924]: Accepted publickey for core from 10.0.0.1 port 51750 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:17:17.369667 sshd[3924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:17.381562 systemd-logind[1465]: New session 13 of user core.
Mar 14 00:17:17.392639 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 14 00:17:17.605586 sshd[3924]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:17.625438 systemd[1]: sshd@12-10.0.0.32:22-10.0.0.1:51750.service: Deactivated successfully.
Mar 14 00:17:17.630513 systemd[1]: session-13.scope: Deactivated successfully.
Mar 14 00:17:17.634663 systemd-logind[1465]: Session 13 logged out. Waiting for processes to exit.
Mar 14 00:17:17.646349 systemd[1]: Started sshd@13-10.0.0.32:22-10.0.0.1:51760.service - OpenSSH per-connection server daemon (10.0.0.1:51760).
Mar 14 00:17:17.650436 systemd-logind[1465]: Removed session 13.
Mar 14 00:17:17.707755 sshd[3952]: Accepted publickey for core from 10.0.0.1 port 51760 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:17:17.712760 sshd[3952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:17.727382 systemd-logind[1465]: New session 14 of user core.
Mar 14 00:17:17.746990 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 14 00:17:18.187507 sshd[3952]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:18.204953 systemd[1]: sshd@13-10.0.0.32:22-10.0.0.1:51760.service: Deactivated successfully.
Mar 14 00:17:18.209759 systemd[1]: session-14.scope: Deactivated successfully.
Mar 14 00:17:18.213269 systemd-logind[1465]: Session 14 logged out. Waiting for processes to exit.
Mar 14 00:17:18.232913 systemd[1]: Started sshd@14-10.0.0.32:22-10.0.0.1:51774.service - OpenSSH per-connection server daemon (10.0.0.1:51774).
Mar 14 00:17:18.236546 systemd-logind[1465]: Removed session 14.
Mar 14 00:17:18.291308 sshd[3965]: Accepted publickey for core from 10.0.0.1 port 51774 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:17:18.292684 sshd[3965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:18.308815 systemd-logind[1465]: New session 15 of user core.
Mar 14 00:17:18.315632 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 14 00:17:19.940448 sshd[3965]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:19.977231 systemd[1]: sshd@14-10.0.0.32:22-10.0.0.1:51774.service: Deactivated successfully.
Mar 14 00:17:19.980394 systemd[1]: session-15.scope: Deactivated successfully.
Mar 14 00:17:19.980892 systemd[1]: session-15.scope: Consumed 1.060s CPU time.
Mar 14 00:17:19.989376 systemd-logind[1465]: Session 15 logged out. Waiting for processes to exit.
Mar 14 00:17:20.006707 systemd[1]: Started sshd@15-10.0.0.32:22-10.0.0.1:51778.service - OpenSSH per-connection server daemon (10.0.0.1:51778).
Mar 14 00:17:20.011244 systemd-logind[1465]: Removed session 15.
Mar 14 00:17:20.098394 sshd[3984]: Accepted publickey for core from 10.0.0.1 port 51778 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:17:20.100785 sshd[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:20.114431 systemd-logind[1465]: New session 16 of user core.
Mar 14 00:17:20.125536 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 14 00:17:20.605553 sshd[3984]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:20.632507 systemd[1]: sshd@15-10.0.0.32:22-10.0.0.1:51778.service: Deactivated successfully.
Mar 14 00:17:20.636861 systemd[1]: session-16.scope: Deactivated successfully.
Mar 14 00:17:20.644366 systemd-logind[1465]: Session 16 logged out. Waiting for processes to exit.
Mar 14 00:17:20.667396 systemd[1]: Started sshd@16-10.0.0.32:22-10.0.0.1:37602.service - OpenSSH per-connection server daemon (10.0.0.1:37602).
Mar 14 00:17:20.673973 systemd-logind[1465]: Removed session 16.
Mar 14 00:17:20.752200 sshd[3997]: Accepted publickey for core from 10.0.0.1 port 37602 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:17:20.758433 sshd[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:20.782495 systemd-logind[1465]: New session 17 of user core.
Mar 14 00:17:20.804705 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 14 00:17:21.215941 sshd[3997]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:21.305684 systemd[1]: sshd@16-10.0.0.32:22-10.0.0.1:37602.service: Deactivated successfully.
Mar 14 00:17:21.312334 systemd[1]: session-17.scope: Deactivated successfully.
Mar 14 00:17:21.322437 systemd-logind[1465]: Session 17 logged out. Waiting for processes to exit.
Mar 14 00:17:21.392612 systemd-logind[1465]: Removed session 17.
Mar 14 00:17:26.259916 systemd[1]: Started sshd@17-10.0.0.32:22-10.0.0.1:37604.service - OpenSSH per-connection server daemon (10.0.0.1:37604).
Mar 14 00:17:26.306209 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 37604 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:17:26.309935 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:26.320728 systemd-logind[1465]: New session 18 of user core.
Mar 14 00:17:26.331071 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 14 00:17:26.528613 sshd[4034]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:26.540084 systemd[1]: sshd@17-10.0.0.32:22-10.0.0.1:37604.service: Deactivated successfully.
Mar 14 00:17:26.542910 systemd[1]: session-18.scope: Deactivated successfully.
Mar 14 00:17:26.545825 systemd-logind[1465]: Session 18 logged out. Waiting for processes to exit.
Mar 14 00:17:26.548086 systemd-logind[1465]: Removed session 18.
Mar 14 00:17:31.607109 systemd[1]: Started sshd@18-10.0.0.32:22-10.0.0.1:41246.service - OpenSSH per-connection server daemon (10.0.0.1:41246).
Mar 14 00:17:31.972358 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 41246 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:17:31.977427 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:17:32.031646 systemd-logind[1465]: New session 19 of user core.
Mar 14 00:17:32.094373 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 14 00:17:32.590978 sshd[4069]: pam_unix(sshd:session): session closed for user core Mar 14 00:17:32.608658 systemd[1]: sshd@18-10.0.0.32:22-10.0.0.1:41246.service: Deactivated successfully. Mar 14 00:17:32.616405 systemd[1]: session-19.scope: Deactivated successfully. Mar 14 00:17:32.620606 systemd-logind[1465]: Session 19 logged out. Waiting for processes to exit. Mar 14 00:17:32.629896 systemd-logind[1465]: Removed session 19. Mar 14 00:17:37.625679 systemd[1]: Started sshd@19-10.0.0.32:22-10.0.0.1:41250.service - OpenSSH per-connection server daemon (10.0.0.1:41250). Mar 14 00:17:37.675435 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 41250 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:17:37.679646 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:17:37.692242 systemd-logind[1465]: New session 20 of user core. Mar 14 00:17:37.698503 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 14 00:17:37.938713 sshd[4112]: pam_unix(sshd:session): session closed for user core Mar 14 00:17:37.945403 systemd[1]: sshd@19-10.0.0.32:22-10.0.0.1:41250.service: Deactivated successfully. Mar 14 00:17:37.950593 systemd[1]: session-20.scope: Deactivated successfully. Mar 14 00:17:37.959293 systemd-logind[1465]: Session 20 logged out. Waiting for processes to exit. Mar 14 00:17:37.963999 systemd-logind[1465]: Removed session 20. Mar 14 00:17:42.969403 systemd[1]: Started sshd@20-10.0.0.32:22-10.0.0.1:44104.service - OpenSSH per-connection server daemon (10.0.0.1:44104). 
Mar 14 00:17:43.031671 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 44104 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:17:43.041604 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:17:43.081061 systemd-logind[1465]: New session 21 of user core. Mar 14 00:17:43.089506 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 14 00:17:43.421573 sshd[4153]: pam_unix(sshd:session): session closed for user core Mar 14 00:17:43.449095 systemd[1]: sshd@20-10.0.0.32:22-10.0.0.1:44104.service: Deactivated successfully. Mar 14 00:17:43.458890 systemd[1]: session-21.scope: Deactivated successfully. Mar 14 00:17:43.466437 systemd-logind[1465]: Session 21 logged out. Waiting for processes to exit. Mar 14 00:17:43.471903 systemd-logind[1465]: Removed session 21. Mar 14 00:17:44.470055 kubelet[2619]: E0314 00:17:44.468956 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:17:44.470055 kubelet[2619]: E0314 00:17:44.469627 2619 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:17:48.467098 systemd[1]: Started sshd@21-10.0.0.32:22-10.0.0.1:44106.service - OpenSSH per-connection server daemon (10.0.0.1:44106). Mar 14 00:17:48.532739 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 44106 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:17:48.553111 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:17:48.571826 systemd-logind[1465]: New session 22 of user core. Mar 14 00:17:48.581550 systemd[1]: Started session-22.scope - Session 22 of User core. 
Mar 14 00:17:48.810868 sshd[4201]: pam_unix(sshd:session): session closed for user core Mar 14 00:17:48.821205 systemd[1]: sshd@21-10.0.0.32:22-10.0.0.1:44106.service: Deactivated successfully. Mar 14 00:17:48.824755 systemd[1]: session-22.scope: Deactivated successfully. Mar 14 00:17:48.826632 systemd-logind[1465]: Session 22 logged out. Waiting for processes to exit. Mar 14 00:17:48.829489 systemd-logind[1465]: Removed session 22.