Apr 14 00:53:15.341916 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 14 00:53:15.341944 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 00:53:15.341959 kernel: BIOS-provided physical RAM map:
Apr 14 00:53:15.341968 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 14 00:53:15.341975 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 14 00:53:15.341983 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 14 00:53:15.341991 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 14 00:53:15.341998 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 14 00:53:15.342005 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 14 00:53:15.342014 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 14 00:53:15.342022 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 14 00:53:15.342310 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 14 00:53:15.342322 kernel: NX (Execute Disable) protection: active
Apr 14 00:53:15.342329 kernel: APIC: Static calls initialized
Apr 14 00:53:15.342338 kernel: SMBIOS 2.8 present.
Apr 14 00:53:15.342349 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 14 00:53:15.342358 kernel: Hypervisor detected: KVM
Apr 14 00:53:15.342366 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 14 00:53:15.342375 kernel: kvm-clock: using sched offset of 6380287305 cycles
Apr 14 00:53:15.342385 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 14 00:53:15.342394 kernel: tsc: Detected 2793.438 MHz processor
Apr 14 00:53:15.342403 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 14 00:53:15.342413 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 14 00:53:15.342421 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 14 00:53:15.342432 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 14 00:53:15.342441 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 14 00:53:15.342449 kernel: Using GB pages for direct mapping
Apr 14 00:53:15.342458 kernel: ACPI: Early table checksum verification disabled
Apr 14 00:53:15.342467 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 14 00:53:15.342476 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:53:15.342484 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:53:15.342493 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:53:15.342501 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 14 00:53:15.342627 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:53:15.342650 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:53:15.342658 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:53:15.342664 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:53:15.342669 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 14 00:53:15.342676 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 14 00:53:15.342681 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 14 00:53:15.342764 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 14 00:53:15.342773 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 14 00:53:15.342778 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 14 00:53:15.342783 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 14 00:53:15.342788 kernel: No NUMA configuration found
Apr 14 00:53:15.342793 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 14 00:53:15.342799 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 14 00:53:15.342807 kernel: Zone ranges:
Apr 14 00:53:15.342812 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 14 00:53:15.342817 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 14 00:53:15.342822 kernel: Normal empty
Apr 14 00:53:15.342828 kernel: Movable zone start for each node
Apr 14 00:53:15.342833 kernel: Early memory node ranges
Apr 14 00:53:15.342838 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 14 00:53:15.342843 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 14 00:53:15.342847 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 14 00:53:15.342852 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 14 00:53:15.342859 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 14 00:53:15.342864 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 14 00:53:15.342874 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 14 00:53:15.342880 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 14 00:53:15.342885 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 14 00:53:15.342890 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 14 00:53:15.342895 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 14 00:53:15.342900 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 14 00:53:15.342905 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 14 00:53:15.342912 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 14 00:53:15.342917 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 14 00:53:15.342921 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 14 00:53:15.342928 kernel: TSC deadline timer available
Apr 14 00:53:15.342933 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 14 00:53:15.342938 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 14 00:53:15.342943 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 14 00:53:15.342948 kernel: kvm-guest: setup PV sched yield
Apr 14 00:53:15.342953 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 14 00:53:15.342959 kernel: Booting paravirtualized kernel on KVM
Apr 14 00:53:15.342964 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 14 00:53:15.342969 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 14 00:53:15.342977 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 14 00:53:15.342986 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 14 00:53:15.342996 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 14 00:53:15.343092 kernel: kvm-guest: PV spinlocks enabled
Apr 14 00:53:15.343101 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 14 00:53:15.343110 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 00:53:15.343127 kernel: random: crng init done
Apr 14 00:53:15.343136 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 14 00:53:15.343142 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 14 00:53:15.343147 kernel: Fallback order for Node 0: 0
Apr 14 00:53:15.343152 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 14 00:53:15.343158 kernel: Policy zone: DMA32
Apr 14 00:53:15.343162 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 14 00:53:15.343168 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137896K reserved, 0K cma-reserved)
Apr 14 00:53:15.343174 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 14 00:53:15.343179 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 14 00:53:15.343184 kernel: ftrace: allocated 149 pages with 4 groups
Apr 14 00:53:15.343189 kernel: Dynamic Preempt: voluntary
Apr 14 00:53:15.343194 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 14 00:53:15.343202 kernel: rcu: RCU event tracing is enabled.
Apr 14 00:53:15.343208 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 14 00:53:15.343213 kernel: Trampoline variant of Tasks RCU enabled.
Apr 14 00:53:15.343218 kernel: Rude variant of Tasks RCU enabled.
Apr 14 00:53:15.343224 kernel: Tracing variant of Tasks RCU enabled.
Apr 14 00:53:15.343229 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 14 00:53:15.343235 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 14 00:53:15.343240 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 14 00:53:15.343244 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 14 00:53:15.343249 kernel: Console: colour VGA+ 80x25
Apr 14 00:53:15.343254 kernel: printk: console [ttyS0] enabled
Apr 14 00:53:15.343259 kernel: ACPI: Core revision 20230628
Apr 14 00:53:15.343264 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 14 00:53:15.343275 kernel: APIC: Switch to symmetric I/O mode setup
Apr 14 00:53:15.343283 kernel: x2apic enabled
Apr 14 00:53:15.343290 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 14 00:53:15.343298 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 14 00:53:15.343306 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 14 00:53:15.343314 kernel: kvm-guest: setup PV IPIs
Apr 14 00:53:15.343321 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 14 00:53:15.343330 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 14 00:53:15.343347 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 14 00:53:15.343357 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 14 00:53:15.343366 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 14 00:53:15.343375 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 14 00:53:15.343385 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 14 00:53:15.343394 kernel: Spectre V2 : Mitigation: Retpolines
Apr 14 00:53:15.343402 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 14 00:53:15.343774 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 14 00:53:15.343795 kernel: RETBleed: Vulnerable
Apr 14 00:53:15.343804 kernel: Speculative Store Bypass: Vulnerable
Apr 14 00:53:15.343814 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 14 00:53:15.343823 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 14 00:53:15.343832 kernel: active return thunk: its_return_thunk
Apr 14 00:53:15.343841 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 14 00:53:15.343850 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 14 00:53:15.343858 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 14 00:53:15.343866 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 14 00:53:15.343877 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 14 00:53:15.343886 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 14 00:53:15.344437 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 14 00:53:15.344448 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 14 00:53:15.344456 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 14 00:53:15.344464 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 14 00:53:15.344473 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 14 00:53:15.344483 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 14 00:53:15.344494 kernel: Freeing SMP alternatives memory: 32K
Apr 14 00:53:15.344541 kernel: pid_max: default: 32768 minimum: 301
Apr 14 00:53:15.344548 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 14 00:53:15.344554 kernel: landlock: Up and running.
Apr 14 00:53:15.344560 kernel: SELinux: Initializing.
Apr 14 00:53:15.344566 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 14 00:53:15.344572 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 14 00:53:15.344578 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 14 00:53:15.344584 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 00:53:15.344589 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 00:53:15.344597 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 00:53:15.344603 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 14 00:53:15.344609 kernel: signal: max sigframe size: 3632
Apr 14 00:53:15.344614 kernel: rcu: Hierarchical SRCU implementation.
Apr 14 00:53:15.344620 kernel: rcu: Max phase no-delay instances is 400.
Apr 14 00:53:15.344626 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 14 00:53:15.344632 kernel: smp: Bringing up secondary CPUs ...
Apr 14 00:53:15.344637 kernel: smpboot: x86: Booting SMP configuration:
Apr 14 00:53:15.344643 kernel: .... node #0, CPUs: #1 #2 #3
Apr 14 00:53:15.344652 kernel: smp: Brought up 1 node, 4 CPUs
Apr 14 00:53:15.344658 kernel: smpboot: Max logical packages: 1
Apr 14 00:53:15.344664 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 14 00:53:15.344669 kernel: devtmpfs: initialized
Apr 14 00:53:15.344675 kernel: x86/mm: Memory block size: 128MB
Apr 14 00:53:15.344681 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 14 00:53:15.344686 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 14 00:53:15.344692 kernel: pinctrl core: initialized pinctrl subsystem
Apr 14 00:53:15.344697 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 14 00:53:15.344704 kernel: audit: initializing netlink subsys (disabled)
Apr 14 00:53:15.344710 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 14 00:53:15.344716 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 14 00:53:15.344721 kernel: audit: type=2000 audit(1776127991.916:1): state=initialized audit_enabled=0 res=1
Apr 14 00:53:15.344727 kernel: cpuidle: using governor menu
Apr 14 00:53:15.344732 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 14 00:53:15.344738 kernel: dca service started, version 1.12.1
Apr 14 00:53:15.344744 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 14 00:53:15.344750 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 14 00:53:15.344757 kernel: PCI: Using configuration type 1 for base access
Apr 14 00:53:15.344763 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 14 00:53:15.344768 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 14 00:53:15.344774 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 14 00:53:15.344779 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 14 00:53:15.344785 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 14 00:53:15.344790 kernel: ACPI: Added _OSI(Module Device)
Apr 14 00:53:15.344796 kernel: ACPI: Added _OSI(Processor Device)
Apr 14 00:53:15.344801 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 14 00:53:15.344808 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 14 00:53:15.344814 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 14 00:53:15.344819 kernel: ACPI: Interpreter enabled
Apr 14 00:53:15.344825 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 14 00:53:15.344830 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 14 00:53:15.344836 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 14 00:53:15.344841 kernel: PCI: Using E820 reservations for host bridge windows
Apr 14 00:53:15.344847 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 14 00:53:15.344853 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 14 00:53:15.345074 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 14 00:53:15.345175 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 14 00:53:15.345264 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 14 00:53:15.345276 kernel: PCI host bridge to bus 0000:00
Apr 14 00:53:15.346116 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 14 00:53:15.346476 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 14 00:53:15.346608 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 14 00:53:15.346684 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 14 00:53:15.346757 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 14 00:53:15.346836 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 14 00:53:15.346917 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 14 00:53:15.347367 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 14 00:53:15.347476 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 14 00:53:15.347603 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 14 00:53:15.347688 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 14 00:53:15.347771 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 14 00:53:15.347854 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 14 00:53:15.347948 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 14 00:53:15.348024 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 14 00:53:15.348146 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 14 00:53:15.348206 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 14 00:53:15.348270 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 14 00:53:15.348344 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 14 00:53:15.348419 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 14 00:53:15.348499 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 14 00:53:15.349082 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 14 00:53:15.350250 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 14 00:53:15.350324 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 14 00:53:15.350395 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 14 00:53:15.350541 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 14 00:53:15.350629 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 14 00:53:15.350701 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 14 00:53:15.350793 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 14 00:53:15.350860 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 14 00:53:15.350916 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 14 00:53:15.350976 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 14 00:53:15.351244 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 14 00:53:15.351256 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 14 00:53:15.351262 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 14 00:53:15.351267 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 14 00:53:15.351273 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 14 00:53:15.351282 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 14 00:53:15.351287 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 14 00:53:15.351293 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 14 00:53:15.351298 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 14 00:53:15.351304 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 14 00:53:15.351309 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 14 00:53:15.351315 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 14 00:53:15.351320 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 14 00:53:15.351326 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 14 00:53:15.351333 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 14 00:53:15.351338 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 14 00:53:15.351344 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 14 00:53:15.351350 kernel: iommu: Default domain type: Translated
Apr 14 00:53:15.351355 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 14 00:53:15.351361 kernel: PCI: Using ACPI for IRQ routing
Apr 14 00:53:15.351370 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 14 00:53:15.351379 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 14 00:53:15.351388 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 14 00:53:15.351477 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 14 00:53:15.351599 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 14 00:53:15.351687 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 14 00:53:15.351698 kernel: vgaarb: loaded
Apr 14 00:53:15.351708 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 14 00:53:15.351716 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 14 00:53:15.351726 kernel: clocksource: Switched to clocksource kvm-clock
Apr 14 00:53:15.351736 kernel: VFS: Disk quotas dquot_6.6.0
Apr 14 00:53:15.351750 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 14 00:53:15.351761 kernel: pnp: PnP ACPI init
Apr 14 00:53:15.351957 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 14 00:53:15.351973 kernel: pnp: PnP ACPI: found 6 devices
Apr 14 00:53:15.351983 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 14 00:53:15.351994 kernel: NET: Registered PF_INET protocol family
Apr 14 00:53:15.352005 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 14 00:53:15.352015 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 14 00:53:15.352254 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 14 00:53:15.352263 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 14 00:53:15.352269 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 14 00:53:15.352275 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 14 00:53:15.352281 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 14 00:53:15.352286 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 14 00:53:15.352292 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 14 00:53:15.352297 kernel: NET: Registered PF_XDP protocol family
Apr 14 00:53:15.352728 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 14 00:53:15.352793 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 14 00:53:15.352842 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 14 00:53:15.352892 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 14 00:53:15.352940 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 14 00:53:15.352989 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 14 00:53:15.352996 kernel: PCI: CLS 0 bytes, default 64
Apr 14 00:53:15.353002 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 14 00:53:15.353007 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 14 00:53:15.353017 kernel: Initialise system trusted keyrings
Apr 14 00:53:15.353023 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 14 00:53:15.353071 kernel: Key type asymmetric registered
Apr 14 00:53:15.353077 kernel: Asymmetric key parser 'x509' registered
Apr 14 00:53:15.353082 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 14 00:53:15.353088 kernel: io scheduler mq-deadline registered
Apr 14 00:53:15.353093 kernel: io scheduler kyber registered
Apr 14 00:53:15.353099 kernel: io scheduler bfq registered
Apr 14 00:53:15.353104 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 14 00:53:15.353112 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 14 00:53:15.353118 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 14 00:53:15.353124 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 14 00:53:15.353129 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 14 00:53:15.353135 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 14 00:53:15.353141 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 14 00:53:15.353146 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 14 00:53:15.353152 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 14 00:53:15.353217 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 14 00:53:15.353272 kernel: rtc_cmos 00:04: registered as rtc0
Apr 14 00:53:15.353324 kernel: rtc_cmos 00:04: setting system clock to 2026-04-14T00:53:14 UTC (1776127994)
Apr 14 00:53:15.353332 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Apr 14 00:53:15.353380 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 14 00:53:15.353387 kernel: intel_pstate: CPU model not supported
Apr 14 00:53:15.353393 kernel: NET: Registered PF_INET6 protocol family
Apr 14 00:53:15.353398 kernel: Segment Routing with IPv6
Apr 14 00:53:15.353404 kernel: In-situ OAM (IOAM) with IPv6
Apr 14 00:53:15.353411 kernel: NET: Registered PF_PACKET protocol family
Apr 14 00:53:15.353417 kernel: Key type dns_resolver registered
Apr 14 00:53:15.353422 kernel: IPI shorthand broadcast: enabled
Apr 14 00:53:15.353428 kernel: sched_clock: Marking stable (2149016368, 506386596)->(3049714080, -394311116)
Apr 14 00:53:15.353433 kernel: registered taskstats version 1
Apr 14 00:53:15.353439 kernel: Loading compiled-in X.509 certificates
Apr 14 00:53:15.353445 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00'
Apr 14 00:53:15.353451 kernel: Key type .fscrypt registered
Apr 14 00:53:15.353456 kernel: Key type fscrypt-provisioning registered
Apr 14 00:53:15.353463 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 14 00:53:15.353468 kernel: ima: Allocated hash algorithm: sha1
Apr 14 00:53:15.353474 kernel: ima: No architecture policies found
Apr 14 00:53:15.353479 kernel: clk: Disabling unused clocks
Apr 14 00:53:15.353484 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 14 00:53:15.353490 kernel: Write protecting the kernel read-only data: 36864k
Apr 14 00:53:15.353496 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 14 00:53:15.353501 kernel: Run /init as init process
Apr 14 00:53:15.353507 kernel: with arguments:
Apr 14 00:53:15.353538 kernel: /init
Apr 14 00:53:15.353545 kernel: with environment:
Apr 14 00:53:15.353551 kernel: HOME=/
Apr 14 00:53:15.353556 kernel: TERM=linux
Apr 14 00:53:15.353564 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 14 00:53:15.353589 systemd[1]: Detected virtualization kvm.
Apr 14 00:53:15.353595 systemd[1]: Detected architecture x86-64.
Apr 14 00:53:15.353615 systemd[1]: Running in initrd.
Apr 14 00:53:15.353637 systemd[1]: No hostname configured, using default hostname.
Apr 14 00:53:15.353643 systemd[1]: Hostname set to .
Apr 14 00:53:15.353649 systemd[1]: Initializing machine ID from VM UUID.
Apr 14 00:53:15.353655 systemd[1]: Queued start job for default target initrd.target.
Apr 14 00:53:15.353661 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 00:53:15.353667 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 00:53:15.353673 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 14 00:53:15.353679 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 14 00:53:15.353687 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 14 00:53:15.353694 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 14 00:53:15.353710 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 14 00:53:15.353717 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 14 00:53:15.353723 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 00:53:15.353730 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 14 00:53:15.353736 systemd[1]: Reached target paths.target - Path Units.
Apr 14 00:53:15.353757 systemd[1]: Reached target slices.target - Slice Units.
Apr 14 00:53:15.353764 systemd[1]: Reached target swap.target - Swaps.
Apr 14 00:53:15.353770 systemd[1]: Reached target timers.target - Timer Units.
Apr 14 00:53:15.353777 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 14 00:53:15.353786 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 14 00:53:15.353796 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 14 00:53:15.353808 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 14 00:53:15.353816 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 00:53:15.353825 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 14 00:53:15.353836 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 00:53:15.353844 systemd[1]: Reached target sockets.target - Socket Units.
Apr 14 00:53:15.353854 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 14 00:53:15.353863 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 14 00:53:15.353874 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 14 00:53:15.353882 systemd[1]: Starting systemd-fsck-usr.service...
Apr 14 00:53:15.353896 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 14 00:53:15.353903 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 14 00:53:15.353909 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 00:53:15.353915 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 14 00:53:15.353943 systemd-journald[194]: Collecting audit messages is disabled.
Apr 14 00:53:15.353961 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 00:53:15.353968 systemd-journald[194]: Journal started
Apr 14 00:53:15.353987 systemd-journald[194]: Runtime Journal (/run/log/journal/81b7e39b654c4ac9b031c4aee4610ada) is 6.0M, max 48.4M, 42.3M free.
Apr 14 00:53:15.350612 systemd-modules-load[195]: Inserted module 'overlay'
Apr 14 00:53:15.359270 systemd[1]: Finished systemd-fsck-usr.service.
Apr 14 00:53:15.364171 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 14 00:53:15.388317 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 14 00:53:15.389736 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 14 00:53:15.626567 kernel: Bridge firewalling registered
Apr 14 00:53:15.391169 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 14 00:53:15.636656 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 14 00:53:15.640545 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 14 00:53:15.646876 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:53:15.651781 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 14 00:53:15.657151 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 00:53:15.678766 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 00:53:15.684841 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 14 00:53:15.691297 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 14 00:53:15.704625 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 00:53:15.707879 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 14 00:53:15.728853 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 14 00:53:15.734850 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 00:53:15.740758 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 14 00:53:15.758635 dracut-cmdline[229]: dracut-dracut-053
Apr 14 00:53:15.761604 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 00:53:15.781553 systemd-resolved[227]: Positive Trust Anchors:
Apr 14 00:53:15.781582 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 14 00:53:15.781616 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 14 00:53:15.785569 systemd-resolved[227]: Defaulting to hostname 'linux'.
Apr 14 00:53:15.786716 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 14 00:53:15.788186 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 14 00:53:15.896478 kernel: SCSI subsystem initialized
Apr 14 00:53:15.912426 kernel: Loading iSCSI transport class v2.0-870.
Apr 14 00:53:15.973380 kernel: iscsi: registered transport (tcp)
Apr 14 00:53:16.010893 kernel: iscsi: registered transport (qla4xxx)
Apr 14 00:53:16.011864 kernel: QLogic iSCSI HBA Driver
Apr 14 00:53:16.069858 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 14 00:53:16.086483 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 14 00:53:16.128904 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 14 00:53:16.129281 kernel: device-mapper: uevent: version 1.0.3
Apr 14 00:53:16.129291 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 14 00:53:16.184426 kernel: raid6: avx512x4 gen() 38841 MB/s
Apr 14 00:53:16.202464 kernel: raid6: avx512x2 gen() 34562 MB/s
Apr 14 00:53:16.220287 kernel: raid6: avx512x1 gen() 34224 MB/s
Apr 14 00:53:16.238310 kernel: raid6: avx2x4 gen() 32575 MB/s
Apr 14 00:53:16.256227 kernel: raid6: avx2x2 gen() 32639 MB/s
Apr 14 00:53:16.274763 kernel: raid6: avx2x1 gen() 24946 MB/s
Apr 14 00:53:16.274992 kernel: raid6: using algorithm avx512x4 gen() 38841 MB/s
Apr 14 00:53:16.293897 kernel: raid6: .... xor() 9809 MB/s, rmw enabled
Apr 14 00:53:16.294192 kernel: raid6: using avx512x2 recovery algorithm
Apr 14 00:53:16.317426 kernel: xor: automatically using best checksumming function avx
Apr 14 00:53:16.555246 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 14 00:53:16.566327 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 14 00:53:16.580117 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 00:53:16.594850 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Apr 14 00:53:16.600082 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 00:53:16.614624 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 14 00:53:16.626577 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Apr 14 00:53:16.662113 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 14 00:53:16.680660 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 14 00:53:16.719639 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 00:53:16.731288 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 14 00:53:16.741750 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 14 00:53:16.747808 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 14 00:53:16.750767 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 00:53:16.753389 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 14 00:53:16.767154 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 14 00:53:16.767456 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 14 00:53:16.779143 kernel: cryptd: max_cpu_qlen set to 1000
Apr 14 00:53:16.782771 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 14 00:53:16.800866 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 14 00:53:16.801309 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 14 00:53:16.803141 kernel: AES CTR mode by8 optimization enabled
Apr 14 00:53:16.804290 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 14 00:53:16.804408 kernel: libata version 3.00 loaded.
Apr 14 00:53:16.806898 kernel: GPT:9289727 != 19775487
Apr 14 00:53:16.806973 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 14 00:53:16.809591 kernel: GPT:9289727 != 19775487
Apr 14 00:53:16.809803 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 14 00:53:16.811503 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 00:53:16.814304 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 14 00:53:16.814413 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 00:53:16.824409 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 00:53:16.835085 kernel: ahci 0000:00:1f.2: version 3.0
Apr 14 00:53:16.835242 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 14 00:53:16.835101 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 14 00:53:16.835264 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:53:16.844077 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (460)
Apr 14 00:53:16.844144 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 14 00:53:16.844277 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 14 00:53:16.844350 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (464)
Apr 14 00:53:16.847229 kernel: scsi host0: ahci
Apr 14 00:53:16.849369 kernel: scsi host1: ahci
Apr 14 00:53:16.852068 kernel: scsi host2: ahci
Apr 14 00:53:16.853514 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 00:53:16.855907 kernel: scsi host3: ahci
Apr 14 00:53:16.861171 kernel: scsi host4: ahci
Apr 14 00:53:16.861449 kernel: scsi host5: ahci
Apr 14 00:53:16.861631 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 14 00:53:16.865708 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 14 00:53:16.868190 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 14 00:53:16.870352 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 14 00:53:16.870612 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 14 00:53:16.874147 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 14 00:53:16.876700 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 00:53:16.889972 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 14 00:53:16.899202 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 14 00:53:16.916453 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 14 00:53:17.105718 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 14 00:53:17.106280 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:53:17.116839 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 14 00:53:17.137400 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 14 00:53:17.149404 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 00:53:17.140712 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 00:53:17.154255 disk-uuid[563]: Primary Header is updated.
Apr 14 00:53:17.154255 disk-uuid[563]: Secondary Entries is updated.
Apr 14 00:53:17.154255 disk-uuid[563]: Secondary Header is updated.
Apr 14 00:53:17.161126 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 00:53:17.162595 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 00:53:17.173306 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 00:53:17.205861 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 14 00:53:17.205908 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 14 00:53:17.210131 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 14 00:53:17.210215 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 14 00:53:17.213209 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 14 00:53:17.215207 kernel: ata3.00: applying bridge limits
Apr 14 00:53:17.217263 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 14 00:53:17.217603 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 14 00:53:17.224211 kernel: ata3.00: configured for UDMA/100
Apr 14 00:53:17.233089 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 14 00:53:17.291130 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 14 00:53:17.291749 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 14 00:53:17.306114 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 14 00:53:18.164918 disk-uuid[565]: The operation has completed successfully.
Apr 14 00:53:18.167648 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 00:53:18.189502 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 14 00:53:18.189847 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 14 00:53:18.215560 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 14 00:53:18.227613 sh[601]: Success
Apr 14 00:53:18.247128 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 14 00:53:18.288824 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 14 00:53:18.299963 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 14 00:53:18.308270 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 14 00:53:18.326889 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 14 00:53:18.327412 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 14 00:53:18.327423 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 14 00:53:18.331739 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 14 00:53:18.336236 kernel: BTRFS info (device dm-0): using free space tree
Apr 14 00:53:18.360502 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 14 00:53:18.361129 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 14 00:53:18.373245 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 14 00:53:18.385714 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 14 00:53:18.408780 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 00:53:18.408847 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 00:53:18.408874 kernel: BTRFS info (device vda6): using free space tree
Apr 14 00:53:18.417658 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 00:53:18.448732 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 14 00:53:18.462291 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 00:53:18.476481 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 14 00:53:18.484994 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 14 00:53:18.566702 ignition[713]: Ignition 2.19.0
Apr 14 00:53:18.566714 ignition[713]: Stage: fetch-offline
Apr 14 00:53:18.566746 ignition[713]: no configs at "/usr/lib/ignition/base.d"
Apr 14 00:53:18.566753 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:53:18.566844 ignition[713]: parsed url from cmdline: ""
Apr 14 00:53:18.566846 ignition[713]: no config URL provided
Apr 14 00:53:18.566852 ignition[713]: reading system config file "/usr/lib/ignition/user.ign"
Apr 14 00:53:18.566857 ignition[713]: no config at "/usr/lib/ignition/user.ign"
Apr 14 00:53:18.566895 ignition[713]: op(1): [started] loading QEMU firmware config module
Apr 14 00:53:18.566898 ignition[713]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 14 00:53:18.577369 ignition[713]: op(1): [finished] loading QEMU firmware config module
Apr 14 00:53:18.624243 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 14 00:53:18.681304 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 14 00:53:18.688979 ignition[713]: parsing config with SHA512: 4f1df7eeda01e7db1c8432b182a534515faf7717281fd4bb97982c2e63f1d680fdab017401ad238a225e8bd588e7df67c7d6699dc04872f673f44093c6bd8519
Apr 14 00:53:18.693252 unknown[713]: fetched base config from "system"
Apr 14 00:53:18.693266 unknown[713]: fetched user config from "qemu"
Apr 14 00:53:18.693864 ignition[713]: fetch-offline: fetch-offline passed
Apr 14 00:53:18.697882 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 14 00:53:18.693956 ignition[713]: Ignition finished successfully
Apr 14 00:53:18.716862 systemd-networkd[789]: lo: Link UP
Apr 14 00:53:18.716888 systemd-networkd[789]: lo: Gained carrier
Apr 14 00:53:18.717775 systemd-networkd[789]: Enumeration completed
Apr 14 00:53:18.717996 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 14 00:53:18.718444 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 00:53:18.718446 systemd-networkd[789]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 14 00:53:18.719576 systemd-networkd[789]: eth0: Link UP
Apr 14 00:53:18.719578 systemd-networkd[789]: eth0: Gained carrier
Apr 14 00:53:18.719584 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 00:53:18.722712 systemd[1]: Reached target network.target - Network.
Apr 14 00:53:18.725674 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 14 00:53:18.740375 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 14 00:53:18.764159 systemd-networkd[789]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 14 00:53:18.767340 ignition[792]: Ignition 2.19.0
Apr 14 00:53:18.767802 ignition[792]: Stage: kargs
Apr 14 00:53:18.768139 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Apr 14 00:53:18.768152 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:53:18.776439 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 14 00:53:18.769220 ignition[792]: kargs: kargs passed
Apr 14 00:53:18.769265 ignition[792]: Ignition finished successfully
Apr 14 00:53:18.792791 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 14 00:53:18.814241 ignition[801]: Ignition 2.19.0
Apr 14 00:53:18.814274 ignition[801]: Stage: disks
Apr 14 00:53:18.814578 ignition[801]: no configs at "/usr/lib/ignition/base.d"
Apr 14 00:53:18.814595 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:53:18.817488 ignition[801]: disks: disks passed
Apr 14 00:53:18.825471 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 14 00:53:18.817580 ignition[801]: Ignition finished successfully
Apr 14 00:53:18.828798 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 14 00:53:18.839163 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 14 00:53:18.842586 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 14 00:53:18.851385 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 14 00:53:18.853697 systemd[1]: Reached target basic.target - Basic System.
Apr 14 00:53:18.877838 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 14 00:53:18.900920 systemd-fsck[811]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 14 00:53:18.906003 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 14 00:53:18.920716 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 14 00:53:19.088129 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 14 00:53:19.089778 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 14 00:53:19.095608 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 14 00:53:19.124936 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 14 00:53:19.127704 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 14 00:53:19.131763 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 14 00:53:19.131813 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 14 00:53:19.131836 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 14 00:53:19.157189 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (819)
Apr 14 00:53:19.157345 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 00:53:19.162618 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 00:53:19.162708 kernel: BTRFS info (device vda6): using free space tree
Apr 14 00:53:19.166754 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 14 00:53:19.177206 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 00:53:19.181726 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 14 00:53:19.188295 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 14 00:53:19.231389 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory
Apr 14 00:53:19.239895 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Apr 14 00:53:19.247681 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Apr 14 00:53:19.253941 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 14 00:53:19.399209 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 14 00:53:19.419408 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 14 00:53:19.426403 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 14 00:53:19.486136 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 14 00:53:19.489883 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 00:53:19.525353 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 14 00:53:19.536517 ignition[932]: INFO : Ignition 2.19.0
Apr 14 00:53:19.536517 ignition[932]: INFO : Stage: mount
Apr 14 00:53:19.536517 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 00:53:19.536517 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:53:19.545857 ignition[932]: INFO : mount: mount passed
Apr 14 00:53:19.545857 ignition[932]: INFO : Ignition finished successfully
Apr 14 00:53:19.543634 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 14 00:53:19.554628 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 14 00:53:19.571241 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 14 00:53:19.589353 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (946)
Apr 14 00:53:19.597272 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 00:53:19.597574 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 00:53:19.597590 kernel: BTRFS info (device vda6): using free space tree
Apr 14 00:53:19.609217 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 00:53:19.613584 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 14 00:53:19.653444 ignition[963]: INFO : Ignition 2.19.0
Apr 14 00:53:19.653444 ignition[963]: INFO : Stage: files
Apr 14 00:53:19.653444 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 00:53:19.653444 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:53:19.670163 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
Apr 14 00:53:19.670163 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 14 00:53:19.670163 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 14 00:53:19.670163 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 14 00:53:19.670163 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 14 00:53:19.670163 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 14 00:53:19.670163 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 14 00:53:19.670163 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 14 00:53:19.666881 unknown[963]: wrote ssh authorized keys file for user: core
Apr 14 00:53:19.863646 systemd-networkd[789]: eth0: Gained IPv6LL
Apr 14 00:53:20.722941 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 14 00:53:20.838343 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 14 00:53:20.838343 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 14 00:53:20.838343 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 14 00:53:20.838343 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 14 00:53:20.838343 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 14 00:53:20.838343 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 14 00:53:20.838343 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 14 00:53:20.838343 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 14 00:53:20.838343 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 14 00:53:20.897782 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 14 00:53:20.897782 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 14 00:53:20.897782 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 14 00:53:20.897782 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 14 00:53:20.897782 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 14 00:53:20.897782 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 14 00:53:21.159516 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 14 00:53:21.940740 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 14 00:53:21.940740 ignition[963]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 14 00:53:21.952974 ignition[963]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 14 00:53:21.952974 ignition[963]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 14 00:53:21.952974 ignition[963]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 14 00:53:21.952974 ignition[963]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 14 00:53:21.952974 ignition[963]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 14 00:53:21.952974 ignition[963]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 14 00:53:21.952974 ignition[963]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 14 00:53:21.952974 ignition[963]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 14 00:53:22.014674 ignition[963]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 14 00:53:22.027512 ignition[963]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 14 00:53:22.088909 ignition[963]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 14 00:53:22.088909 ignition[963]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 14 00:53:22.088909 ignition[963]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 14 00:53:22.088909 ignition[963]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 14 00:53:22.115779 ignition[963]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 14 00:53:22.115779 ignition[963]: INFO : files: files passed
Apr 14 00:53:22.115779 ignition[963]: INFO : Ignition finished successfully
Apr 14 00:53:22.121942 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 14 00:53:22.148970 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 14 00:53:22.154741 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 14 00:53:22.158090 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 14 00:53:22.158208 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 14 00:53:22.170680 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 14 00:53:22.179108 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 00:53:22.188978 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 00:53:22.182119 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 14 00:53:22.200385 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 00:53:22.190770 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 14 00:53:22.212262 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 14 00:53:22.243621 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 14 00:53:22.243877 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 14 00:53:22.249758 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 14 00:53:22.256335 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 14 00:53:22.256786 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 14 00:53:22.275666 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 14 00:53:22.290839 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 14 00:53:22.307074 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 14 00:53:22.331908 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 14 00:53:22.332163 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 00:53:22.341461 systemd[1]: Stopped target timers.target - Timer Units.
Apr 14 00:53:22.350282 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 14 00:53:22.350509 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 14 00:53:22.358729 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 14 00:53:22.358897 systemd[1]: Stopped target basic.target - Basic System.
Apr 14 00:53:22.364938 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 14 00:53:22.369537 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 14 00:53:22.375380 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 14 00:53:22.380418 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 14 00:53:22.384886 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 14 00:53:22.389646 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 14 00:53:22.394715 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 14 00:53:22.400854 systemd[1]: Stopped target swap.target - Swaps.
Apr 14 00:53:22.402824 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 14 00:53:22.402952 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 14 00:53:22.411139 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 14 00:53:22.417534 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 00:53:22.423785 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 14 00:53:22.426720 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 00:53:22.432593 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 14 00:53:22.432723 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 14 00:53:22.442306 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 14 00:53:22.442784 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 14 00:53:22.450709 systemd[1]: Stopped target paths.target - Path Units.
Apr 14 00:53:22.454796 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 14 00:53:22.457347 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 00:53:22.457982 systemd[1]: Stopped target slices.target - Slice Units.
Apr 14 00:53:22.464793 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 14 00:53:22.470105 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 14 00:53:22.470212 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 14 00:53:22.479723 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 14 00:53:22.479842 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 14 00:53:22.484410 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 14 00:53:22.484585 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 14 00:53:22.487739 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 14 00:53:22.487893 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 14 00:53:22.522604 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 14 00:53:22.527827 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 14 00:53:22.528604 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 00:53:22.534988 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 14 00:53:22.536824 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 14 00:53:22.536968 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 00:53:22.543970 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 14 00:53:22.551980 ignition[1017]: INFO : Ignition 2.19.0
Apr 14 00:53:22.551980 ignition[1017]: INFO : Stage: umount
Apr 14 00:53:22.551980 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 00:53:22.551980 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:53:22.551980 ignition[1017]: INFO : umount: umount passed
Apr 14 00:53:22.551980 ignition[1017]: INFO : Ignition finished successfully
Apr 14 00:53:22.544149 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 14 00:53:22.555531 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 14 00:53:22.555881 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 14 00:53:22.564664 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 14 00:53:22.564766 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 14 00:53:22.566843 systemd[1]: Stopped target network.target - Network.
Apr 14 00:53:22.568621 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 14 00:53:22.568704 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 14 00:53:22.569024 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 14 00:53:22.569101 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 14 00:53:22.570999 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 14 00:53:22.571087 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 14 00:53:22.573312 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 14 00:53:22.573352 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 14 00:53:22.575180 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 14 00:53:22.577164 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 14 00:53:22.631433 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 14 00:53:22.631857 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 14 00:53:22.644777 systemd-networkd[789]: eth0: DHCPv6 lease lost
Apr 14 00:53:22.647894 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 14 00:53:22.648313 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 14 00:53:22.652276 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 14 00:53:22.652316 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 00:53:22.655400 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 14 00:53:22.669710 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 14 00:53:22.669824 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 14 00:53:22.677622 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 14 00:53:22.677666 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 14 00:53:22.683527 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 14 00:53:22.683611 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 14 00:53:22.687785 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 14 00:53:22.687841 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 00:53:22.689889 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 00:53:22.702715 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 14 00:53:22.718655 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 14 00:53:22.718822 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 00:53:22.722947 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 14 00:53:22.723003 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 14 00:53:22.727335 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 14 00:53:22.727370 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 00:53:22.780445 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 14 00:53:22.780526 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 14 00:53:22.791646 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 14 00:53:22.791932 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 14 00:53:22.803913 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 14 00:53:22.804325 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 00:53:22.825779 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 14 00:53:22.825880 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 14 00:53:22.825932 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 00:53:22.842617 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 14 00:53:22.842714 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:53:22.848715 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 14 00:53:22.848810 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 14 00:53:22.856285 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 14 00:53:22.856783 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 14 00:53:22.860186 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 14 00:53:22.860256 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 14 00:53:22.865633 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 14 00:53:22.868468 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 14 00:53:22.868529 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 14 00:53:22.891708 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 14 00:53:22.901753 systemd[1]: Switching root.
Apr 14 00:53:22.942867 systemd-journald[194]: Journal stopped
Apr 14 00:53:24.440998 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 14 00:53:24.441248 kernel: SELinux: policy capability network_peer_controls=1
Apr 14 00:53:24.441267 kernel: SELinux: policy capability open_perms=1
Apr 14 00:53:24.441278 kernel: SELinux: policy capability extended_socket_class=1
Apr 14 00:53:24.441288 kernel: SELinux: policy capability always_check_network=0
Apr 14 00:53:24.441295 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 14 00:53:24.441302 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 14 00:53:24.441310 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 14 00:53:24.441318 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 14 00:53:24.441326 kernel: audit: type=1403 audit(1776128003.090:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 14 00:53:24.441336 systemd[1]: Successfully loaded SELinux policy in 44.698ms.
Apr 14 00:53:24.441353 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.373ms.
Apr 14 00:53:24.441362 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 14 00:53:24.441374 systemd[1]: Detected virtualization kvm.
Apr 14 00:53:24.441382 systemd[1]: Detected architecture x86-64.
Apr 14 00:53:24.441393 systemd[1]: Detected first boot.
Apr 14 00:53:24.441401 systemd[1]: Initializing machine ID from VM UUID.
Apr 14 00:53:24.441409 zram_generator::config[1065]: No configuration found.
Apr 14 00:53:24.441418 systemd[1]: Populated /etc with preset unit settings.
Apr 14 00:53:24.441428 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 14 00:53:24.441443 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 14 00:53:24.441455 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 14 00:53:24.441472 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 14 00:53:24.441485 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 14 00:53:24.441497 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 14 00:53:24.441510 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 14 00:53:24.441523 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 14 00:53:24.441536 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 14 00:53:24.441616 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 14 00:53:24.441633 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 14 00:53:24.441647 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 00:53:24.441659 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 00:53:24.441672 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 14 00:53:24.441686 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 14 00:53:24.441700 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 14 00:53:24.441713 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 14 00:53:24.441723 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 14 00:53:24.441740 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 00:53:24.441753 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 14 00:53:24.441766 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 14 00:53:24.441778 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 14 00:53:24.441797 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 14 00:53:24.441810 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 00:53:24.441823 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 14 00:53:24.441837 systemd[1]: Reached target slices.target - Slice Units.
Apr 14 00:53:24.441852 systemd[1]: Reached target swap.target - Swaps.
Apr 14 00:53:24.441866 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 14 00:53:24.441880 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 14 00:53:24.441893 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 00:53:24.441905 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 14 00:53:24.441917 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 00:53:24.441930 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 14 00:53:24.441943 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 14 00:53:24.441957 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 14 00:53:24.441972 systemd[1]: Mounting media.mount - External Media Directory...
Apr 14 00:53:24.441988 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:53:24.442001 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 14 00:53:24.442014 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 14 00:53:24.442083 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 14 00:53:24.442094 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 14 00:53:24.442102 systemd[1]: Reached target machines.target - Containers.
Apr 14 00:53:24.442110 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 14 00:53:24.442120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 00:53:24.442128 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 14 00:53:24.442138 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 14 00:53:24.442146 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 00:53:24.442153 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 14 00:53:24.442161 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 00:53:24.442169 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 14 00:53:24.442177 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 00:53:24.442185 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 14 00:53:24.442195 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 14 00:53:24.442203 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 14 00:53:24.442211 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 14 00:53:24.442219 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 14 00:53:24.442227 kernel: fuse: init (API version 7.39)
Apr 14 00:53:24.442235 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 14 00:53:24.442242 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 14 00:53:24.442250 kernel: ACPI: bus type drm_connector registered
Apr 14 00:53:24.442257 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 14 00:53:24.442267 kernel: loop: module loaded
Apr 14 00:53:24.442274 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 14 00:53:24.442299 systemd-journald[1149]: Collecting audit messages is disabled.
Apr 14 00:53:24.442317 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 14 00:53:24.442327 systemd-journald[1149]: Journal started
Apr 14 00:53:24.442353 systemd-journald[1149]: Runtime Journal (/run/log/journal/81b7e39b654c4ac9b031c4aee4610ada) is 6.0M, max 48.4M, 42.3M free.
Apr 14 00:53:23.852902 systemd[1]: Queued start job for default target multi-user.target.
Apr 14 00:53:23.877895 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 14 00:53:23.879285 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 14 00:53:24.454925 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 14 00:53:24.455327 systemd[1]: Stopped verity-setup.service.
Apr 14 00:53:24.464268 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:53:24.469277 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 14 00:53:24.473746 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 14 00:53:24.477842 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 14 00:53:24.483242 systemd[1]: Mounted media.mount - External Media Directory.
Apr 14 00:53:24.486841 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 14 00:53:24.491480 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 14 00:53:24.496450 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 14 00:53:24.503019 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 14 00:53:24.508848 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 00:53:24.514757 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 14 00:53:24.515471 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 14 00:53:24.522835 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 00:53:24.523782 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 00:53:24.527803 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 14 00:53:24.528693 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 14 00:53:24.533788 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 00:53:24.534785 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 00:53:24.538632 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 14 00:53:24.538779 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 14 00:53:24.542524 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 00:53:24.544171 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 00:53:24.548775 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 14 00:53:24.552843 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 14 00:53:24.556957 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 14 00:53:24.576773 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 00:53:24.584872 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 14 00:53:24.603890 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 14 00:53:24.609111 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 14 00:53:24.612628 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 14 00:53:24.612687 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 14 00:53:24.616786 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 14 00:53:24.622612 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 14 00:53:24.684710 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 14 00:53:24.688969 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 00:53:24.694780 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 14 00:53:24.701977 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 14 00:53:24.705161 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 14 00:53:24.708967 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 14 00:53:24.712467 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 14 00:53:24.715920 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 14 00:53:24.722749 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 14 00:53:24.730658 systemd-journald[1149]: Time spent on flushing to /var/log/journal/81b7e39b654c4ac9b031c4aee4610ada is 21.265ms for 951 entries.
Apr 14 00:53:24.730658 systemd-journald[1149]: System Journal (/var/log/journal/81b7e39b654c4ac9b031c4aee4610ada) is 8.0M, max 195.6M, 187.6M free.
Apr 14 00:53:24.796990 systemd-journald[1149]: Received client request to flush runtime journal.
Apr 14 00:53:24.797295 kernel: loop0: detected capacity change from 0 to 140768
Apr 14 00:53:24.732938 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 14 00:53:24.740498 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 14 00:53:24.758512 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 14 00:53:24.762858 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 14 00:53:24.766865 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 14 00:53:24.772890 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 14 00:53:24.791428 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 14 00:53:24.810761 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 14 00:53:24.819918 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 14 00:53:24.825156 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 14 00:53:24.831927 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 14 00:53:24.846241 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 14 00:53:24.853578 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 14 00:53:24.860821 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 14 00:53:24.867733 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 14 00:53:24.869941 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 14 00:53:24.887313 kernel: loop1: detected capacity change from 0 to 217752
Apr 14 00:53:24.916966 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Apr 14 00:53:24.916988 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Apr 14 00:53:24.925847 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 00:53:25.011270 kernel: loop2: detected capacity change from 0 to 142488
Apr 14 00:53:25.084169 kernel: loop3: detected capacity change from 0 to 140768
Apr 14 00:53:25.107369 kernel: loop4: detected capacity change from 0 to 217752
Apr 14 00:53:25.137189 kernel: loop5: detected capacity change from 0 to 142488
Apr 14 00:53:25.167993 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 14 00:53:25.168631 (sd-merge)[1203]: Merged extensions into '/usr'.
Apr 14 00:53:25.177637 systemd[1]: Reloading requested from client PID 1180 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 14 00:53:25.177670 systemd[1]: Reloading...
Apr 14 00:53:25.331995 zram_generator::config[1235]: No configuration found.
Apr 14 00:53:25.411886 ldconfig[1175]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 14 00:53:25.473511 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 00:53:25.548914 systemd[1]: Reloading finished in 370 ms.
Apr 14 00:53:25.601780 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 14 00:53:25.607743 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 14 00:53:25.613409 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 14 00:53:25.637325 systemd[1]: Starting ensure-sysext.service...
Apr 14 00:53:25.640810 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 14 00:53:25.645989 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 00:53:25.651722 systemd[1]: Reloading requested from client PID 1267 ('systemctl') (unit ensure-sysext.service)...
Apr 14 00:53:25.651764 systemd[1]: Reloading...
Apr 14 00:53:25.665158 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 14 00:53:25.665492 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 14 00:53:25.666403 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 14 00:53:25.666724 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Apr 14 00:53:25.666803 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Apr 14 00:53:25.669545 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Apr 14 00:53:25.669609 systemd-tmpfiles[1268]: Skipping /boot
Apr 14 00:53:25.671121 systemd-udevd[1269]: Using default interface naming scheme 'v255'.
Apr 14 00:53:25.680333 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Apr 14 00:53:25.680343 systemd-tmpfiles[1268]: Skipping /boot
Apr 14 00:53:25.786280 zram_generator::config[1305]: No configuration found.
Apr 14 00:53:25.834147 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1294)
Apr 14 00:53:25.881125 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 14 00:53:25.889274 kernel: ACPI: button: Power Button [PWRF]
Apr 14 00:53:25.912415 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 14 00:53:25.912730 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 14 00:53:25.912877 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 14 00:53:25.958110 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 14 00:53:25.952127 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 00:53:25.991076 kernel: mousedev: PS/2 mouse device common for all mice
Apr 14 00:53:26.094897 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 14 00:53:26.095323 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 14 00:53:26.100227 systemd[1]: Reloading finished in 448 ms.
Apr 14 00:53:26.180177 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 00:53:26.210400 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 00:53:26.229695 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 14 00:53:26.306759 systemd[1]: Finished ensure-sysext.service.
Apr 14 00:53:26.338965 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:53:26.358710 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 14 00:53:26.365779 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 14 00:53:26.369445 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 00:53:26.371146 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 14 00:53:26.377834 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 00:53:26.392258 lvm[1374]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 14 00:53:26.392788 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 14 00:53:26.407731 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 00:53:26.416411 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 00:53:26.420551 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 00:53:26.423125 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 14 00:53:26.429761 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 14 00:53:26.435863 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 14 00:53:26.444266 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 14 00:53:26.450410 augenrules[1390]: No rules
Apr 14 00:53:26.452499 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 14 00:53:26.462309 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 14 00:53:26.468617 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 00:53:26.474079 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:53:26.475128 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 14 00:53:26.482924 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 14 00:53:26.486987 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 00:53:26.487595 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 00:53:26.492928 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 14 00:53:26.494668 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 14 00:53:26.499941 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 00:53:26.501379 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 00:53:26.507511 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 00:53:26.508952 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 00:53:26.514635 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 14 00:53:26.519494 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 14 00:53:26.534367 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 14 00:53:26.549366 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 14 00:53:26.549707 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 14 00:53:26.549880 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 14 00:53:26.552861 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 14 00:53:26.556015 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 14 00:53:26.557333 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 14 00:53:26.558700 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 14 00:53:26.560195 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 14 00:53:26.561024 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 14 00:53:26.571251 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 14 00:53:26.607261 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 14 00:53:26.710377 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 14 00:53:26.788387 systemd-networkd[1388]: lo: Link UP
Apr 14 00:53:26.788417 systemd-networkd[1388]: lo: Gained carrier
Apr 14 00:53:26.789718 systemd-networkd[1388]: Enumeration completed
Apr 14 00:53:26.791402 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 00:53:26.791600 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 14 00:53:26.792924 systemd-networkd[1388]: eth0: Link UP
Apr 14 00:53:26.792946 systemd-networkd[1388]: eth0: Gained carrier
Apr 14 00:53:26.792963 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 00:53:26.813170 systemd-resolved[1391]: Positive Trust Anchors:
Apr 14 00:53:26.813667 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 14 00:53:26.813764 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 14 00:53:26.818243 systemd-networkd[1388]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 14 00:53:26.820651 systemd-timesyncd[1396]: Network configuration changed, trying to establish connection.
Apr 14 00:53:26.822925 systemd-timesyncd[1396]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 14 00:53:26.823316 systemd-timesyncd[1396]: Initial clock synchronization to Tue 2026-04-14 00:53:27.035055 UTC.
Apr 14 00:53:26.823459 systemd-resolved[1391]: Defaulting to hostname 'linux'.
Apr 14 00:53:26.846861 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 14 00:53:26.852889 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 14 00:53:26.857547 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 14 00:53:26.861074 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:53:26.864966 systemd[1]: Reached target network.target - Network.
Apr 14 00:53:26.866871 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 14 00:53:26.869469 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 14 00:53:26.872427 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 14 00:53:26.876024 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 14 00:53:26.880305 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 14 00:53:26.887211 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 14 00:53:26.887286 systemd[1]: Reached target paths.target - Path Units.
Apr 14 00:53:26.892280 systemd[1]: Reached target time-set.target - System Time Set.
Apr 14 00:53:26.898742 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 14 00:53:26.908923 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 14 00:53:26.913900 systemd[1]: Reached target timers.target - Timer Units.
Apr 14 00:53:26.920518 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 14 00:53:26.931010 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 14 00:53:26.953928 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 14 00:53:26.959486 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 14 00:53:26.962536 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 14 00:53:26.964971 systemd[1]: Reached target sockets.target - Socket Units.
Apr 14 00:53:26.967462 systemd[1]: Reached target basic.target - Basic System.
Apr 14 00:53:26.970774 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 14 00:53:26.970814 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 14 00:53:26.973107 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 14 00:53:26.977217 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 14 00:53:26.981868 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 14 00:53:26.989712 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 14 00:53:26.992291 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 14 00:53:26.995224 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 14 00:53:26.997022 jq[1432]: false
Apr 14 00:53:27.002213 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 14 00:53:27.014455 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 14 00:53:27.014576 dbus-daemon[1431]: [system] SELinux support is enabled
Apr 14 00:53:27.019903 extend-filesystems[1433]: Found loop3
Apr 14 00:53:27.019903 extend-filesystems[1433]: Found loop4
Apr 14 00:53:27.019903 extend-filesystems[1433]: Found loop5
Apr 14 00:53:27.019903 extend-filesystems[1433]: Found sr0
Apr 14 00:53:27.019903 extend-filesystems[1433]: Found vda
Apr 14 00:53:27.019903 extend-filesystems[1433]: Found vda1
Apr 14 00:53:27.019903 extend-filesystems[1433]: Found vda2
Apr 14 00:53:27.019903 extend-filesystems[1433]: Found vda3
Apr 14 00:53:27.019903 extend-filesystems[1433]: Found usr
Apr 14 00:53:27.019903 extend-filesystems[1433]: Found vda4
Apr 14 00:53:27.019903 extend-filesystems[1433]: Found vda6
Apr 14 00:53:27.019903 extend-filesystems[1433]: Found vda7
Apr 14 00:53:27.019903 extend-filesystems[1433]: Found vda9
Apr 14 00:53:27.019903 extend-filesystems[1433]: Checking size of /dev/vda9
Apr 14 00:53:27.163115 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 14 00:53:27.163173 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1319)
Apr 14 00:53:27.027113 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 14 00:53:27.163316 extend-filesystems[1433]: Resized partition /dev/vda9
Apr 14 00:53:27.124490 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 14 00:53:27.166829 extend-filesystems[1448]: resize2fs 1.47.1 (20-May-2024)
Apr 14 00:53:27.129912 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 14 00:53:27.130582 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 14 00:53:27.136925 systemd[1]: Starting update-engine.service - Update Engine...
Apr 14 00:53:27.146694 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 14 00:53:27.149959 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 14 00:53:27.164241 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 14 00:53:27.166496 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 14 00:53:27.166833 systemd[1]: motdgen.service: Deactivated successfully.
Apr 14 00:53:27.167036 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 14 00:53:27.175539 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 14 00:53:27.180128 update_engine[1450]: I20260414 00:53:27.177495 1450 main.cc:92] Flatcar Update Engine starting
Apr 14 00:53:27.175836 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 14 00:53:27.180527 update_engine[1450]: I20260414 00:53:27.180495 1450 update_check_scheduler.cc:74] Next update check in 9m47s
Apr 14 00:53:27.193129 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 14 00:53:27.214003 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 14 00:53:27.219576 systemd[1]: Started update-engine.service - Update Engine.
Apr 14 00:53:27.220809 extend-filesystems[1448]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 14 00:53:27.220809 extend-filesystems[1448]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 14 00:53:27.220809 extend-filesystems[1448]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 14 00:53:27.220628 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 14 00:53:27.246899 jq[1452]: true
Apr 14 00:53:27.247102 extend-filesystems[1433]: Resized filesystem in /dev/vda9
Apr 14 00:53:27.220646 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 14 00:53:27.221649 systemd-logind[1449]: New seat seat0.
Apr 14 00:53:27.255016 jq[1465]: true
Apr 14 00:53:27.222988 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 14 00:53:27.261980 tar[1456]: linux-amd64/LICENSE
Apr 14 00:53:27.261980 tar[1456]: linux-amd64/helm
Apr 14 00:53:27.224693 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 14 00:53:27.228700 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 14 00:53:27.236565 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 14 00:53:27.236588 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 14 00:53:27.243466 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 14 00:53:27.243499 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 14 00:53:27.247742 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 14 00:53:27.299474 bash[1487]: Updated "/home/core/.ssh/authorized_keys" Apr 14 00:53:27.301257 locksmithd[1471]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 14 00:53:27.302205 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 14 00:53:27.307775 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 14 00:53:27.496387 containerd[1464]: time="2026-04-14T00:53:27.493764351Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 14 00:53:27.524650 containerd[1464]: time="2026-04-14T00:53:27.524557138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 14 00:53:27.530120 containerd[1464]: time="2026-04-14T00:53:27.528620608Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 14 00:53:27.530120 containerd[1464]: time="2026-04-14T00:53:27.528664657Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 14 00:53:27.530120 containerd[1464]: time="2026-04-14T00:53:27.528678834Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Apr 14 00:53:27.530120 containerd[1464]: time="2026-04-14T00:53:27.528830878Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 14 00:53:27.530120 containerd[1464]: time="2026-04-14T00:53:27.528849538Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 14 00:53:27.530120 containerd[1464]: time="2026-04-14T00:53:27.528903849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 00:53:27.530120 containerd[1464]: time="2026-04-14T00:53:27.528915129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 14 00:53:27.530120 containerd[1464]: time="2026-04-14T00:53:27.529118730Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 00:53:27.530120 containerd[1464]: time="2026-04-14T00:53:27.529131132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 14 00:53:27.530120 containerd[1464]: time="2026-04-14T00:53:27.529141087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 00:53:27.530120 containerd[1464]: time="2026-04-14T00:53:27.529147475Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 14 00:53:27.530386 containerd[1464]: time="2026-04-14T00:53:27.529208048Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Apr 14 00:53:27.530386 containerd[1464]: time="2026-04-14T00:53:27.529421211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 14 00:53:27.530386 containerd[1464]: time="2026-04-14T00:53:27.529526942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 00:53:27.530386 containerd[1464]: time="2026-04-14T00:53:27.529536872Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 14 00:53:27.530386 containerd[1464]: time="2026-04-14T00:53:27.529602534Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 14 00:53:27.530386 containerd[1464]: time="2026-04-14T00:53:27.529637378Z" level=info msg="metadata content store policy set" policy=shared Apr 14 00:53:27.538964 containerd[1464]: time="2026-04-14T00:53:27.538807872Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 14 00:53:27.540426 containerd[1464]: time="2026-04-14T00:53:27.539969465Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 14 00:53:27.541622 containerd[1464]: time="2026-04-14T00:53:27.541466190Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 14 00:53:27.561336 containerd[1464]: time="2026-04-14T00:53:27.560626428Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 14 00:53:27.607570 kernel: hrtimer: interrupt took 3429662 ns Apr 14 00:53:27.607626 containerd[1464]: time="2026-04-14T00:53:27.607567365Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Apr 14 00:53:27.608897 containerd[1464]: time="2026-04-14T00:53:27.608612574Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 14 00:53:27.610132 containerd[1464]: time="2026-04-14T00:53:27.610082462Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 14 00:53:27.610613 containerd[1464]: time="2026-04-14T00:53:27.610583392Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 14 00:53:27.610613 containerd[1464]: time="2026-04-14T00:53:27.610608489Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 14 00:53:27.610663 containerd[1464]: time="2026-04-14T00:53:27.610623126Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 14 00:53:27.610663 containerd[1464]: time="2026-04-14T00:53:27.610646017Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 14 00:53:27.610707 containerd[1464]: time="2026-04-14T00:53:27.610663046Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 14 00:53:27.610707 containerd[1464]: time="2026-04-14T00:53:27.610679557Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 14 00:53:27.610707 containerd[1464]: time="2026-04-14T00:53:27.610695902Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 14 00:53:27.610782 containerd[1464]: time="2026-04-14T00:53:27.610711575Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Apr 14 00:53:27.610782 containerd[1464]: time="2026-04-14T00:53:27.610726534Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 14 00:53:27.610782 containerd[1464]: time="2026-04-14T00:53:27.610742002Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 14 00:53:27.610782 containerd[1464]: time="2026-04-14T00:53:27.610770018Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 14 00:53:27.610868 containerd[1464]: time="2026-04-14T00:53:27.610796271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 14 00:53:27.610868 containerd[1464]: time="2026-04-14T00:53:27.610813108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 14 00:53:27.610868 containerd[1464]: time="2026-04-14T00:53:27.610830346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 14 00:53:27.610868 containerd[1464]: time="2026-04-14T00:53:27.610844673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 14 00:53:27.610976 containerd[1464]: time="2026-04-14T00:53:27.610865572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 14 00:53:27.610976 containerd[1464]: time="2026-04-14T00:53:27.610882502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 14 00:53:27.610976 containerd[1464]: time="2026-04-14T00:53:27.610897718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 14 00:53:27.610976 containerd[1464]: time="2026-04-14T00:53:27.610913593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Apr 14 00:53:27.610976 containerd[1464]: time="2026-04-14T00:53:27.610929733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 14 00:53:27.610976 containerd[1464]: time="2026-04-14T00:53:27.610946676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 14 00:53:27.610976 containerd[1464]: time="2026-04-14T00:53:27.610959721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 14 00:53:27.610976 containerd[1464]: time="2026-04-14T00:53:27.610972488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 14 00:53:27.611503 containerd[1464]: time="2026-04-14T00:53:27.610988124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 14 00:53:27.611503 containerd[1464]: time="2026-04-14T00:53:27.611005761Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 14 00:53:27.611503 containerd[1464]: time="2026-04-14T00:53:27.611029958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 14 00:53:27.611503 containerd[1464]: time="2026-04-14T00:53:27.611044223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 14 00:53:27.611503 containerd[1464]: time="2026-04-14T00:53:27.611402681Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 14 00:53:27.612481 containerd[1464]: time="2026-04-14T00:53:27.611979129Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 14 00:53:27.612481 containerd[1464]: time="2026-04-14T00:53:27.612084568Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 14 00:53:27.612481 containerd[1464]: time="2026-04-14T00:53:27.612101208Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 14 00:53:27.612481 containerd[1464]: time="2026-04-14T00:53:27.612116636Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 14 00:53:27.612481 containerd[1464]: time="2026-04-14T00:53:27.612128278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 14 00:53:27.612481 containerd[1464]: time="2026-04-14T00:53:27.612143734Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 14 00:53:27.612481 containerd[1464]: time="2026-04-14T00:53:27.612154543Z" level=info msg="NRI interface is disabled by configuration." Apr 14 00:53:27.612481 containerd[1464]: time="2026-04-14T00:53:27.612165946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 14 00:53:27.614531 containerd[1464]: time="2026-04-14T00:53:27.612675302Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 14 00:53:27.614531 containerd[1464]: time="2026-04-14T00:53:27.612752600Z" level=info msg="Connect containerd service" Apr 14 00:53:27.614531 containerd[1464]: time="2026-04-14T00:53:27.612792694Z" level=info msg="using legacy CRI server" Apr 14 00:53:27.614531 containerd[1464]: time="2026-04-14T00:53:27.612799015Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 14 00:53:27.614531 containerd[1464]: time="2026-04-14T00:53:27.612872706Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 14 00:53:27.615412 containerd[1464]: time="2026-04-14T00:53:27.614962652Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 14 00:53:27.615412 containerd[1464]: time="2026-04-14T00:53:27.615158359Z" level=info msg="Start subscribing containerd event" Apr 14 00:53:27.615412 containerd[1464]: time="2026-04-14T00:53:27.615216849Z" level=info msg="Start recovering state" Apr 14 00:53:27.615412 containerd[1464]: time="2026-04-14T00:53:27.615288798Z" level=info msg="Start event monitor" Apr 14 00:53:27.615412 containerd[1464]: time="2026-04-14T00:53:27.615313891Z" level=info msg="Start 
snapshots syncer" Apr 14 00:53:27.615412 containerd[1464]: time="2026-04-14T00:53:27.615324052Z" level=info msg="Start cni network conf syncer for default" Apr 14 00:53:27.615412 containerd[1464]: time="2026-04-14T00:53:27.615333304Z" level=info msg="Start streaming server" Apr 14 00:53:27.615933 containerd[1464]: time="2026-04-14T00:53:27.615894238Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 14 00:53:27.615999 containerd[1464]: time="2026-04-14T00:53:27.615970103Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 14 00:53:27.616209 containerd[1464]: time="2026-04-14T00:53:27.616097484Z" level=info msg="containerd successfully booted in 0.124457s" Apr 14 00:53:27.616192 systemd[1]: Started containerd.service - containerd container runtime. Apr 14 00:53:27.868114 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 14 00:53:27.886728 tar[1456]: linux-amd64/README.md Apr 14 00:53:27.900973 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 14 00:53:27.905199 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 14 00:53:27.921770 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 14 00:53:27.938450 systemd[1]: issuegen.service: Deactivated successfully. Apr 14 00:53:27.938665 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 14 00:53:27.955785 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 14 00:53:27.977386 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 14 00:53:27.994106 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 14 00:53:27.999200 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 14 00:53:28.002398 systemd[1]: Reached target getty.target - Login Prompts. 
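The `failed to load cni during init` error above is expected on first boot: containerd's CRI plugin found no network config under /etc/cni/net.d, so pod networking stays uninitialized until a CNI add-on (Flannel, Calico, etc.) installs one. Purely as an illustration — the name, bridge device, and subnet below are placeholders, not values from this system — a minimal conflist using the standard `bridge` and `portmap` plugins would look like:

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Dropped into /etc/cni/net.d/ (e.g. as 10-example.conflist), the "cni network conf syncer" started above picks it up without restarting containerd.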
Apr 14 00:53:28.316561 systemd-networkd[1388]: eth0: Gained IPv6LL Apr 14 00:53:28.332500 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 14 00:53:28.338018 systemd[1]: Reached target network-online.target - Network is Online. Apr 14 00:53:28.352500 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 14 00:53:28.362979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:53:28.370133 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 14 00:53:28.412960 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 14 00:53:28.413187 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 14 00:53:28.416947 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 14 00:53:28.421442 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 14 00:53:29.570532 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 14 00:53:29.602169 systemd[1]: Started sshd@0-10.0.0.73:22-10.0.0.1:59406.service - OpenSSH per-connection server daemon (10.0.0.1:59406). Apr 14 00:53:29.716224 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 59406 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:53:29.722939 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:53:29.762673 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 14 00:53:29.784152 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 14 00:53:29.841336 systemd-logind[1449]: New session 1 of user core. Apr 14 00:53:29.892965 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 14 00:53:29.925948 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 14 00:53:29.981035 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 14 00:53:30.236093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:53:30.244532 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 14 00:53:30.265599 (kubelet)[1554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 00:53:30.358192 systemd[1543]: Queued start job for default target default.target. Apr 14 00:53:30.378481 systemd[1543]: Created slice app.slice - User Application Slice. Apr 14 00:53:30.378518 systemd[1543]: Reached target paths.target - Paths. Apr 14 00:53:30.378536 systemd[1543]: Reached target timers.target - Timers. Apr 14 00:53:30.394223 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 14 00:53:30.475170 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 14 00:53:30.476233 systemd[1543]: Reached target sockets.target - Sockets. Apr 14 00:53:30.476252 systemd[1543]: Reached target basic.target - Basic System. Apr 14 00:53:30.476307 systemd[1543]: Reached target default.target - Main User Target. Apr 14 00:53:30.476335 systemd[1543]: Startup finished in 469ms. Apr 14 00:53:30.476681 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 14 00:53:30.510338 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 14 00:53:30.653945 systemd[1]: Startup finished in 2.438s (kernel) + 8.195s (initrd) + 7.606s (userspace) = 18.240s. Apr 14 00:53:30.844995 systemd[1]: Started sshd@1-10.0.0.73:22-10.0.0.1:59422.service - OpenSSH per-connection server daemon (10.0.0.1:59422). 
Apr 14 00:53:31.126760 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 59422 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:53:31.135755 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:53:31.176830 systemd-logind[1449]: New session 2 of user core. Apr 14 00:53:31.205385 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 14 00:53:31.391466 sshd[1569]: pam_unix(sshd:session): session closed for user core Apr 14 00:53:31.408876 systemd[1]: sshd@1-10.0.0.73:22-10.0.0.1:59422.service: Deactivated successfully. Apr 14 00:53:31.417886 systemd[1]: session-2.scope: Deactivated successfully. Apr 14 00:53:31.426934 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Apr 14 00:53:31.492950 systemd[1]: Started sshd@2-10.0.0.73:22-10.0.0.1:59434.service - OpenSSH per-connection server daemon (10.0.0.1:59434). Apr 14 00:53:31.496022 systemd-logind[1449]: Removed session 2. Apr 14 00:53:31.668836 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 59434 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:53:31.669107 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:53:31.707623 systemd-logind[1449]: New session 3 of user core. Apr 14 00:53:31.727292 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 14 00:53:31.931876 sshd[1576]: pam_unix(sshd:session): session closed for user core Apr 14 00:53:31.956827 systemd[1]: sshd@2-10.0.0.73:22-10.0.0.1:59434.service: Deactivated successfully. Apr 14 00:53:31.965640 systemd[1]: session-3.scope: Deactivated successfully. Apr 14 00:53:31.968480 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Apr 14 00:53:31.994843 systemd[1]: Started sshd@3-10.0.0.73:22-10.0.0.1:59436.service - OpenSSH per-connection server daemon (10.0.0.1:59436). Apr 14 00:53:31.999398 systemd-logind[1449]: Removed session 3. 
Apr 14 00:53:32.130213 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 59436 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:53:32.131812 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:53:32.162143 systemd-logind[1449]: New session 4 of user core. Apr 14 00:53:32.185913 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 14 00:53:32.295361 sshd[1584]: pam_unix(sshd:session): session closed for user core Apr 14 00:53:32.325669 systemd[1]: sshd@3-10.0.0.73:22-10.0.0.1:59436.service: Deactivated successfully. Apr 14 00:53:32.334633 systemd[1]: session-4.scope: Deactivated successfully. Apr 14 00:53:32.353153 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Apr 14 00:53:32.376954 systemd[1]: Started sshd@4-10.0.0.73:22-10.0.0.1:59450.service - OpenSSH per-connection server daemon (10.0.0.1:59450). Apr 14 00:53:32.383856 systemd-logind[1449]: Removed session 4. Apr 14 00:53:32.519351 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 59450 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:53:32.529827 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:53:32.571404 systemd-logind[1449]: New session 5 of user core. Apr 14 00:53:32.596010 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 14 00:53:32.686460 kubelet[1554]: E0414 00:53:32.686396 1554 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 00:53:32.690940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 00:53:32.691495 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
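The kubelet exit above — `open /var/lib/kubelet/config.yaml: no such file or directory` — is the normal state on a node where `kubeadm init` or `kubeadm join` has not yet run; kubeadm generates that file from its KubeletConfiguration during bootstrap. As a sketch only (illustrative values, not this node's eventual configuration), a minimal hand-written config.yaml would look like:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
```

In practice the file should be left for kubeadm to write; the restarts that follow in this log resolve themselves once bootstrap runs.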
Apr 14 00:53:32.692539 systemd[1]: kubelet.service: Consumed 1.638s CPU time. Apr 14 00:53:32.704003 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 14 00:53:32.705758 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 00:53:33.814572 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 14 00:53:33.820651 (dockerd)[1615]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 14 00:53:34.756114 dockerd[1615]: time="2026-04-14T00:53:34.755495581Z" level=info msg="Starting up" Apr 14 00:53:34.998227 dockerd[1615]: time="2026-04-14T00:53:34.998121762Z" level=info msg="Loading containers: start." Apr 14 00:53:35.790797 kernel: Initializing XFRM netlink socket Apr 14 00:53:36.227168 systemd-networkd[1388]: docker0: Link UP Apr 14 00:53:36.370858 dockerd[1615]: time="2026-04-14T00:53:36.370441880Z" level=info msg="Loading containers: done." Apr 14 00:53:36.417938 dockerd[1615]: time="2026-04-14T00:53:36.416417481Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 14 00:53:36.417938 dockerd[1615]: time="2026-04-14T00:53:36.416523214Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 14 00:53:36.417938 dockerd[1615]: time="2026-04-14T00:53:36.416615859Z" level=info msg="Daemon has completed initialization" Apr 14 00:53:36.418655 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2539098888-merged.mount: Deactivated successfully. 
Apr 14 00:53:36.520797 dockerd[1615]: time="2026-04-14T00:53:36.520609456Z" level=info msg="API listen on /run/docker.sock" Apr 14 00:53:36.520934 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 14 00:53:38.417189 containerd[1464]: time="2026-04-14T00:53:38.417141802Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.3\"" Apr 14 00:53:39.436581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount693839351.mount: Deactivated successfully. Apr 14 00:53:42.943444 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 14 00:53:43.024395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:53:43.620296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:53:43.654590 (kubelet)[1828]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 00:53:43.956141 kubelet[1828]: E0414 00:53:43.955859 1828 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 00:53:43.977414 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 00:53:43.977568 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
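The `Scheduled restart job, restart counter is at 1` line reflects the kubelet unit's own Restart= policy: systemd relaunches the failed service roughly every ten seconds until its config file appears. The relevant directives, as they would appear in the unit or a drop-in — a sketch of the typical upstream kubelet.service settings, not a dump of this system's unit file:

```ini
# /etc/systemd/system/kubelet.service.d/override.conf  (illustrative path)
[Service]
Restart=always
RestartSec=10
```

This matches the ~10–11 s cadence of the restart attempts visible later in the log.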
Apr 14 00:53:47.078707 containerd[1464]: time="2026-04-14T00:53:47.077919797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:53:47.083262 containerd[1464]: time="2026-04-14T00:53:47.081884329Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.3: active requests=0, bytes read=27569134" Apr 14 00:53:47.087849 containerd[1464]: time="2026-04-14T00:53:47.086905130Z" level=info msg="ImageCreate event name:\"sha256:0f2b96c93465f04111c58c3fc41ad0ed2e16b5b3c4b6282b84dc951ad0ea4d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:53:47.098412 containerd[1464]: time="2026-04-14T00:53:47.098278258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6c6e2571f98e738015a39ed21305ab4166a3e2873f9cc01d7fa58371cf0f5d30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:53:47.108364 containerd[1464]: time="2026-04-14T00:53:47.108245660Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.3\" with image id \"sha256:0f2b96c93465f04111c58c3fc41ad0ed2e16b5b3c4b6282b84dc951ad0ea4d66\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6c6e2571f98e738015a39ed21305ab4166a3e2873f9cc01d7fa58371cf0f5d30\", size \"27566295\" in 8.691053135s" Apr 14 00:53:47.108364 containerd[1464]: time="2026-04-14T00:53:47.108323032Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.3\" returns image reference \"sha256:0f2b96c93465f04111c58c3fc41ad0ed2e16b5b3c4b6282b84dc951ad0ea4d66\"" Apr 14 00:53:47.109924 containerd[1464]: time="2026-04-14T00:53:47.109198941Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.3\"" Apr 14 00:53:50.808840 containerd[1464]: time="2026-04-14T00:53:50.808594611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:53:50.810898 containerd[1464]: time="2026-04-14T00:53:50.810485620Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.3: active requests=0, bytes read=21449527" Apr 14 00:53:50.817568 containerd[1464]: time="2026-04-14T00:53:50.816833247Z" level=info msg="ImageCreate event name:\"sha256:0eb506280f9bca2258673771e7029de0d5e92881f0fbaebd4a835e7e302b7d27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:53:50.837157 containerd[1464]: time="2026-04-14T00:53:50.837008679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23a24aafa10831eb47477b0b31a525ee8a4a99d2c17251aac46c43be8201ec59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:53:50.839990 containerd[1464]: time="2026-04-14T00:53:50.839857223Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.3\" with image id \"sha256:0eb506280f9bca2258673771e7029de0d5e92881f0fbaebd4a835e7e302b7d27\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23a24aafa10831eb47477b0b31a525ee8a4a99d2c17251aac46c43be8201ec59\", size \"23014443\" in 3.729918017s" Apr 14 00:53:50.839990 containerd[1464]: time="2026-04-14T00:53:50.839941017Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.3\" returns image reference \"sha256:0eb506280f9bca2258673771e7029de0d5e92881f0fbaebd4a835e7e302b7d27\"" Apr 14 00:53:50.841778 containerd[1464]: time="2026-04-14T00:53:50.841693593Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.3\"" Apr 14 00:53:54.264232 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 14 00:53:54.311281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:53:55.104061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
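The containerd `Pulled image` records interleaved through this log pack the repo tag, image id, digest, size, and wall-clock pull duration into one quoted message. A small self-contained sketch (not part of this system; the sample digest values are trimmed for brevity) that extracts tag, size, and duration from such a message:

```python
import re

# containerd writes msg="..." with inner quotes escaped as \"; normalize
# those first so the pattern can match plain double quotes.
PULLED_RE = re.compile(
    r'Pulled image "(?P<tag>[^"]+)" with image id .*?'
    r'size "(?P<size>\d+)" in (?P<dur>[\d.]+)(?P<unit>ms|s)\b'
)

def parse_pull(msg: str):
    """Return (repo_tag, size_bytes, seconds), or None for non-pull records."""
    m = PULLED_RE.search(msg.replace('\\"', '"'))
    if m is None:
        return None
    seconds = float(m.group("dur"))
    if m.group("unit") == "ms":
        seconds /= 1000.0  # pause-image pulls are logged in milliseconds
    return m.group("tag"), int(m.group("size")), seconds

# Sample message body in the journal's escaped form (digests trimmed).
sample = ('Pulled image \\"registry.k8s.io/etcd:3.6.6-0\\" with image id '
          '\\"sha256:0a108f\\", repo tag \\"registry.k8s.io/etcd:3.6.6-0\\", '
          'repo digest \\"registry.k8s.io/etcd@sha256:60a30b\\", '
          'size \\"23641797\\" in 3.390220304s')
print(parse_pull(sample))
```

Applied across a boot log, this kind of extraction makes it easy to total registry time — here the five control-plane image pulls dominate the minute after first boot.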
Apr 14 00:53:55.130648 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 00:53:55.328334 containerd[1464]: time="2026-04-14T00:53:55.325660641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:53:55.328334 containerd[1464]: time="2026-04-14T00:53:55.327959967Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.3: active requests=0, bytes read=15548358" Apr 14 00:53:55.338100 containerd[1464]: time="2026-04-14T00:53:55.335471691Z" level=info msg="ImageCreate event name:\"sha256:87c9b0e4f80d3039b60fbfaf2a4d423e6a891df883a55adb58b8d5b37a4cb23c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:53:55.362765 containerd[1464]: time="2026-04-14T00:53:55.361948071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:7070dff574916315268ab483f1088a107b1f3a8a1a87f3e3645933111ade7013\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:53:55.371442 containerd[1464]: time="2026-04-14T00:53:55.368887549Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.3\" with image id \"sha256:87c9b0e4f80d3039b60fbfaf2a4d423e6a891df883a55adb58b8d5b37a4cb23c\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:7070dff574916315268ab483f1088a107b1f3a8a1a87f3e3645933111ade7013\", size \"17113292\" in 4.527137784s" Apr 14 00:53:55.371442 containerd[1464]: time="2026-04-14T00:53:55.369114386Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.3\" returns image reference \"sha256:87c9b0e4f80d3039b60fbfaf2a4d423e6a891df883a55adb58b8d5b37a4cb23c\"" Apr 14 00:53:55.378335 containerd[1464]: time="2026-04-14T00:53:55.375850142Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.3\"" Apr 14 00:53:55.973442 
kubelet[1852]: E0414 00:53:55.973294 1852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 00:53:55.982588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 00:53:55.984016 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 00:54:01.210769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4166188834.mount: Deactivated successfully. Apr 14 00:54:03.448587 containerd[1464]: time="2026-04-14T00:54:03.445436929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:54:03.450732 containerd[1464]: time="2026-04-14T00:54:03.449625661Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.3: active requests=0, bytes read=25685215" Apr 14 00:54:03.457060 containerd[1464]: time="2026-04-14T00:54:03.456543986Z" level=info msg="ImageCreate event name:\"sha256:53ed370019059b0cdce5a02a20f8aca81f977e34956368c7f1b7ce9709398b79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:54:03.511758 containerd[1464]: time="2026-04-14T00:54:03.511576407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8743aec6a360aedcb7a076cbecea367b072abe1bfade2e2098650df502e2bc89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:54:03.565597 containerd[1464]: time="2026-04-14T00:54:03.537737096Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.3\" with image id \"sha256:53ed370019059b0cdce5a02a20f8aca81f977e34956368c7f1b7ce9709398b79\", repo tag \"registry.k8s.io/kube-proxy:v1.35.3\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:8743aec6a360aedcb7a076cbecea367b072abe1bfade2e2098650df502e2bc89\", size \"25684340\" in 8.16183308s" Apr 14 00:54:03.565597 containerd[1464]: time="2026-04-14T00:54:03.537852289Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.3\" returns image reference \"sha256:53ed370019059b0cdce5a02a20f8aca81f977e34956368c7f1b7ce9709398b79\"" Apr 14 00:54:03.568024 containerd[1464]: time="2026-04-14T00:54:03.567413674Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 14 00:54:04.691993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1557389892.mount: Deactivated successfully. Apr 14 00:54:06.236908 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 14 00:54:06.264012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:54:07.127580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:54:07.145712 (kubelet)[1909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 00:54:07.526678 kubelet[1909]: E0414 00:54:07.524691 1909 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 00:54:07.532835 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 00:54:07.533562 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 14 00:54:09.367520 containerd[1464]: time="2026-04-14T00:54:09.367306696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:54:09.369276 containerd[1464]: time="2026-04-14T00:54:09.369127200Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23555980" Apr 14 00:54:09.374573 containerd[1464]: time="2026-04-14T00:54:09.374427240Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:54:09.396422 containerd[1464]: time="2026-04-14T00:54:09.396178053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:54:09.402067 containerd[1464]: time="2026-04-14T00:54:09.401926267Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 5.834438629s" Apr 14 00:54:09.402067 containerd[1464]: time="2026-04-14T00:54:09.402009321Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Apr 14 00:54:09.403940 containerd[1464]: time="2026-04-14T00:54:09.403593152Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 14 00:54:10.246975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3197898406.mount: Deactivated successfully. 
Apr 14 00:54:10.270768 containerd[1464]: time="2026-04-14T00:54:10.270349604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:54:10.273557 containerd[1464]: time="2026-04-14T00:54:10.273368362Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 14 00:54:10.277880 containerd[1464]: time="2026-04-14T00:54:10.277786067Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:54:10.308517 containerd[1464]: time="2026-04-14T00:54:10.308175410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:54:10.314313 containerd[1464]: time="2026-04-14T00:54:10.313787488Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 909.789558ms" Apr 14 00:54:10.314313 containerd[1464]: time="2026-04-14T00:54:10.313976636Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 14 00:54:10.315370 containerd[1464]: time="2026-04-14T00:54:10.315304527Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 14 00:54:11.209744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount603395641.mount: Deactivated successfully. Apr 14 00:54:12.480362 update_engine[1450]: I20260414 00:54:12.479873 1450 update_attempter.cc:509] Updating boot flags... 
Apr 14 00:54:12.534804 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1999) Apr 14 00:54:12.594276 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2002) Apr 14 00:54:12.676168 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2002) Apr 14 00:54:13.631575 containerd[1464]: time="2026-04-14T00:54:13.630188663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:54:13.634334 containerd[1464]: time="2026-04-14T00:54:13.634167007Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643431" Apr 14 00:54:13.680215 containerd[1464]: time="2026-04-14T00:54:13.680104408Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:54:13.695246 containerd[1464]: time="2026-04-14T00:54:13.694562425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:54:13.705920 containerd[1464]: time="2026-04-14T00:54:13.705605362Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 3.390220304s" Apr 14 00:54:13.705920 containerd[1464]: time="2026-04-14T00:54:13.705734362Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Apr 14 00:54:17.572516 systemd[1]: 
kubelet.service: Scheduled restart job, restart counter is at 4. Apr 14 00:54:17.595208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:54:17.969302 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 14 00:54:17.969917 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 14 00:54:17.971661 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:54:17.981903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:54:18.153513 systemd[1]: Reloading requested from client PID 2060 ('systemctl') (unit session-5.scope)... Apr 14 00:54:18.153541 systemd[1]: Reloading... Apr 14 00:54:18.376123 zram_generator::config[2100]: No configuration found. Apr 14 00:54:18.936848 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 00:54:19.243632 systemd[1]: Reloading finished in 1089 ms. Apr 14 00:54:19.333160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:54:19.338098 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 14 00:54:19.340351 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:54:19.340688 systemd[1]: kubelet.service: Deactivated successfully. Apr 14 00:54:19.340946 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:54:19.344671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:54:20.035726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
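The daemon-reload at 00:54:18 surfaces a unit-file lint: docker.socket still declares `ListenStream=/var/run/docker.sock`, which systemd rewrites to /run/docker.sock because /var/run is a legacy symlink. Silencing the warning means updating the socket unit as the message suggests; a drop-in sketch (path and file name illustrative):

```ini
# /etc/systemd/system/docker.socket.d/modern-path.conf  (illustrative)
[Socket]
# An empty assignment clears the inherited list; then set the canonical path.
ListenStream=
ListenStream=/run/docker.sock
```

The socket keeps working either way — the warning is cosmetic, since both paths name the same inode.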
Apr 14 00:54:20.036723 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 14 00:54:20.347453 kubelet[2150]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 14 00:54:21.027287 kubelet[2150]: I0414 00:54:21.026240 2150 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 14 00:54:21.030005 kubelet[2150]: I0414 00:54:21.029677 2150 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 14 00:54:21.030005 kubelet[2150]: I0414 00:54:21.029790 2150 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 14 00:54:21.030318 kubelet[2150]: I0414 00:54:21.029820 2150 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 14 00:54:21.032743 kubelet[2150]: I0414 00:54:21.031821 2150 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 14 00:54:21.201420 kubelet[2150]: E0414 00:54:21.201362 2150 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 00:54:21.206755 kubelet[2150]: I0414 00:54:21.205580 2150 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 14 00:54:21.258480 kubelet[2150]: E0414 00:54:21.256345 2150 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 14 00:54:21.258480 kubelet[2150]: I0414 00:54:21.256442 2150 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 14 00:54:21.292678 kubelet[2150]: I0414 00:54:21.292146 2150 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 14 00:54:21.296375 kubelet[2150]: I0414 00:54:21.294780 2150 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 14 00:54:21.310787 kubelet[2150]: I0414 00:54:21.295853 2150 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 14 00:54:21.325183 kubelet[2150]: I0414 00:54:21.323946 2150 topology_manager.go:143] "Creating topology manager with none policy"
Apr 14 00:54:21.325920 kubelet[2150]: I0414 00:54:21.325702 2150 container_manager_linux.go:308] "Creating device plugin manager"
Apr 14 00:54:21.331526 kubelet[2150]: I0414 00:54:21.330807 2150 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 14 00:54:21.398804 kubelet[2150]: I0414 00:54:21.398361 2150 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 14 00:54:21.407004 kubelet[2150]: I0414 00:54:21.406911 2150 kubelet.go:482] "Attempting to sync node with API server"
Apr 14 00:54:21.407004 kubelet[2150]: I0414 00:54:21.406976 2150 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 14 00:54:21.407004 kubelet[2150]: I0414 00:54:21.407058 2150 kubelet.go:394] "Adding apiserver pod source"
Apr 14 00:54:21.407587 kubelet[2150]: I0414 00:54:21.407074 2150 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 14 00:54:21.413866 kubelet[2150]: I0414 00:54:21.413807 2150 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 14 00:54:21.423521 kubelet[2150]: I0414 00:54:21.423023 2150 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 14 00:54:21.423521 kubelet[2150]: I0414 00:54:21.423101 2150 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 14 00:54:21.423828 kubelet[2150]: W0414 00:54:21.423790 2150 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 14 00:54:21.457358 kubelet[2150]: I0414 00:54:21.457010 2150 server.go:1257] "Started kubelet"
Apr 14 00:54:21.462113 kubelet[2150]: I0414 00:54:21.459979 2150 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 14 00:54:21.462113 kubelet[2150]: I0414 00:54:21.461475 2150 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 14 00:54:21.462348 kubelet[2150]: I0414 00:54:21.462239 2150 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 14 00:54:21.467946 kubelet[2150]: I0414 00:54:21.467251 2150 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 14 00:54:21.467946 kubelet[2150]: I0414 00:54:21.467341 2150 server.go:317] "Adding debug handlers to kubelet server"
Apr 14 00:54:21.473266 kubelet[2150]: I0414 00:54:21.473219 2150 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 14 00:54:21.490112 kubelet[2150]: I0414 00:54:21.488700 2150 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 14 00:54:21.490112 kubelet[2150]: I0414 00:54:21.486625 2150 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 14 00:54:21.493430 kubelet[2150]: E0414 00:54:21.492674 2150 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:54:21.496928 kubelet[2150]: E0414 00:54:21.496678 2150 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="200ms"
Apr 14 00:54:21.499268 kubelet[2150]: I0414 00:54:21.475443 2150 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 14 00:54:21.510563 kubelet[2150]: E0414 00:54:21.479469 2150 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.73:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.73:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6130fc2bde22f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:54:21.456900655 +0000 UTC m=+1.400947160,LastTimestamp:2026-04-14 00:54:21.456900655 +0000 UTC m=+1.400947160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 00:54:21.514488 kubelet[2150]: I0414 00:54:21.512488 2150 reconciler.go:29] "Reconciler: start to sync state"
Apr 14 00:54:21.516784 kubelet[2150]: I0414 00:54:21.516400 2150 factory.go:223] Registration of the systemd container factory successfully
Apr 14 00:54:21.521879 kubelet[2150]: I0414 00:54:21.521644 2150 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 14 00:54:21.539628 kubelet[2150]: E0414 00:54:21.539023 2150 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 14 00:54:21.552758 kubelet[2150]: I0414 00:54:21.552263 2150 factory.go:223] Registration of the containerd container factory successfully
Apr 14 00:54:21.593265 kubelet[2150]: E0414 00:54:21.593155 2150 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:54:21.615565 kubelet[2150]: I0414 00:54:21.615417 2150 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 14 00:54:21.679367 kubelet[2150]: I0414 00:54:21.675592 2150 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 14 00:54:21.679367 kubelet[2150]: I0414 00:54:21.675654 2150 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 14 00:54:21.679367 kubelet[2150]: I0414 00:54:21.675744 2150 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 14 00:54:21.679367 kubelet[2150]: E0414 00:54:21.675855 2150 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 14 00:54:21.694143 kubelet[2150]: E0414 00:54:21.693485 2150 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:54:21.701168 kubelet[2150]: I0414 00:54:21.701129 2150 cpu_manager.go:225] "Starting" policy="none"
Apr 14 00:54:21.701357 kubelet[2150]: I0414 00:54:21.701347 2150 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 14 00:54:21.701415 kubelet[2150]: I0414 00:54:21.701408 2150 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 14 00:54:21.702393 kubelet[2150]: E0414 00:54:21.702325 2150 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="400ms"
Apr 14 00:54:21.707091 kubelet[2150]: I0414 00:54:21.706981 2150 policy_none.go:50] "Start"
Apr 14 00:54:21.707211 kubelet[2150]: I0414 00:54:21.707162 2150 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 14 00:54:21.707211 kubelet[2150]: I0414 00:54:21.707176 2150 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 14 00:54:21.714585 kubelet[2150]: I0414 00:54:21.714431 2150 policy_none.go:44] "Start"
Apr 14 00:54:21.772583 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 14 00:54:21.779655 kubelet[2150]: E0414 00:54:21.777949 2150 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 00:54:21.808728 kubelet[2150]: E0414 00:54:21.807910 2150 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:54:21.909933 kubelet[2150]: E0414 00:54:21.908812 2150 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:54:21.912545 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 14 00:54:21.962624 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 14 00:54:21.980144 kubelet[2150]: E0414 00:54:21.979551 2150 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 00:54:21.985187 kubelet[2150]: E0414 00:54:21.985101 2150 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 14 00:54:21.986084 kubelet[2150]: I0414 00:54:21.985747 2150 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 14 00:54:21.986084 kubelet[2150]: I0414 00:54:21.985767 2150 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 14 00:54:21.986463 kubelet[2150]: I0414 00:54:21.986172 2150 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 14 00:54:21.991537 kubelet[2150]: E0414 00:54:21.991421 2150 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 14 00:54:21.992434 kubelet[2150]: E0414 00:54:21.991983 2150 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:54:22.102972 kubelet[2150]: I0414 00:54:22.102676 2150 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 14 00:54:22.103426 kubelet[2150]: E0414 00:54:22.103365 2150 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="800ms"
Apr 14 00:54:22.103790 kubelet[2150]: E0414 00:54:22.103548 2150 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost"
Apr 14 00:54:22.399075 kubelet[2150]: I0414 00:54:22.394244 2150 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 14 00:54:22.399075 kubelet[2150]: E0414 00:54:22.394691 2150 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost"
Apr 14 00:54:22.497794 kubelet[2150]: I0414 00:54:22.496356 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0efda43298855bbbb711a60eb1616ef3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0efda43298855bbbb711a60eb1616ef3\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 00:54:22.497794 kubelet[2150]: I0414 00:54:22.496476 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0efda43298855bbbb711a60eb1616ef3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0efda43298855bbbb711a60eb1616ef3\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 00:54:22.500480 kubelet[2150]: I0414 00:54:22.496509 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0efda43298855bbbb711a60eb1616ef3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0efda43298855bbbb711a60eb1616ef3\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 00:54:22.546224 systemd[1]: Created slice kubepods-burstable-pod0efda43298855bbbb711a60eb1616ef3.slice - libcontainer container kubepods-burstable-pod0efda43298855bbbb711a60eb1616ef3.slice.
Apr 14 00:54:22.597677 kubelet[2150]: E0414 00:54:22.597514 2150 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:54:22.608361 kubelet[2150]: I0414 00:54:22.608017 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bd70d524e6bc561f2082b467706799ed-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"bd70d524e6bc561f2082b467706799ed\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:54:22.608537 kubelet[2150]: I0414 00:54:22.608440 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bd70d524e6bc561f2082b467706799ed-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"bd70d524e6bc561f2082b467706799ed\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:54:22.608537 kubelet[2150]: I0414 00:54:22.608469 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bd70d524e6bc561f2082b467706799ed-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"bd70d524e6bc561f2082b467706799ed\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:54:22.608874 kubelet[2150]: I0414 00:54:22.608563 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd70d524e6bc561f2082b467706799ed-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"bd70d524e6bc561f2082b467706799ed\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:54:22.608874 kubelet[2150]: I0414 00:54:22.608611 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bd70d524e6bc561f2082b467706799ed-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"bd70d524e6bc561f2082b467706799ed\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:54:22.612473 kubelet[2150]: I0414 00:54:22.612407 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3566c1d7ed03bb3c60facf009a5678dd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3566c1d7ed03bb3c60facf009a5678dd\") " pod="kube-system/kube-scheduler-localhost"
Apr 14 00:54:22.623341 systemd[1]: Created slice kubepods-burstable-podbd70d524e6bc561f2082b467706799ed.slice - libcontainer container kubepods-burstable-podbd70d524e6bc561f2082b467706799ed.slice.
Apr 14 00:54:22.684621 kubelet[2150]: E0414 00:54:22.683750 2150 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:54:22.694458 systemd[1]: Created slice kubepods-burstable-pod3566c1d7ed03bb3c60facf009a5678dd.slice - libcontainer container kubepods-burstable-pod3566c1d7ed03bb3c60facf009a5678dd.slice.
Apr 14 00:54:22.715505 kubelet[2150]: E0414 00:54:22.715406 2150 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:54:22.734724 kubelet[2150]: E0414 00:54:22.734445 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:22.744109 containerd[1464]: time="2026-04-14T00:54:22.743831117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3566c1d7ed03bb3c60facf009a5678dd,Namespace:kube-system,Attempt:0,}"
Apr 14 00:54:22.808341 kubelet[2150]: I0414 00:54:22.807824 2150 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 14 00:54:22.829305 kubelet[2150]: E0414 00:54:22.829139 2150 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost"
Apr 14 00:54:22.910792 kubelet[2150]: E0414 00:54:22.910593 2150 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="1.6s"
Apr 14 00:54:22.921858 kubelet[2150]: E0414 00:54:22.921443 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:22.930155 containerd[1464]: time="2026-04-14T00:54:22.929294773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0efda43298855bbbb711a60eb1616ef3,Namespace:kube-system,Attempt:0,}"
Apr 14 00:54:23.010141 kubelet[2150]: E0414 00:54:23.009259 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:23.015866 containerd[1464]: time="2026-04-14T00:54:23.015144906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:bd70d524e6bc561f2082b467706799ed,Namespace:kube-system,Attempt:0,}"
Apr 14 00:54:23.402765 kubelet[2150]: E0414 00:54:23.402456 2150 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 00:54:23.690339 kubelet[2150]: I0414 00:54:23.689455 2150 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 14 00:54:23.699261 kubelet[2150]: E0414 00:54:23.693871 2150 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost"
Apr 14 00:54:23.726297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1451614654.mount: Deactivated successfully.
Apr 14 00:54:23.783932 containerd[1464]: time="2026-04-14T00:54:23.783477650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 00:54:23.803149 containerd[1464]: time="2026-04-14T00:54:23.802253670Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988"
Apr 14 00:54:23.808272 containerd[1464]: time="2026-04-14T00:54:23.808166613Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 00:54:23.817506 containerd[1464]: time="2026-04-14T00:54:23.817440799Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 00:54:23.820833 containerd[1464]: time="2026-04-14T00:54:23.820663987Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 14 00:54:23.822514 containerd[1464]: time="2026-04-14T00:54:23.822350405Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 14 00:54:23.837017 containerd[1464]: time="2026-04-14T00:54:23.836394357Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 00:54:23.956404 containerd[1464]: time="2026-04-14T00:54:23.954792780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 00:54:23.986280 containerd[1464]: time="2026-04-14T00:54:23.986165381Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.241923966s"
Apr 14 00:54:24.006777 containerd[1464]: time="2026-04-14T00:54:24.006011026Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 990.788924ms"
Apr 14 00:54:24.019939 containerd[1464]: time="2026-04-14T00:54:24.019753152Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.089162244s"
Apr 14 00:54:24.531392 kubelet[2150]: E0414 00:54:24.530357 2150 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="3.2s"
Apr 14 00:54:24.531392 kubelet[2150]: E0414 00:54:24.529355 2150 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.73:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.73:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6130fc2bde22f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:54:21.456900655 +0000 UTC m=+1.400947160,LastTimestamp:2026-04-14 00:54:21.456900655 +0000 UTC m=+1.400947160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 00:54:25.318110 kubelet[2150]: I0414 00:54:25.317421 2150 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 14 00:54:25.345565 kubelet[2150]: E0414 00:54:25.344706 2150 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost"
Apr 14 00:54:26.768946 containerd[1464]: time="2026-04-14T00:54:26.759818461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 00:54:26.768946 containerd[1464]: time="2026-04-14T00:54:26.759899767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 00:54:26.768946 containerd[1464]: time="2026-04-14T00:54:26.759919914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:54:26.774785 containerd[1464]: time="2026-04-14T00:54:26.774485563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:54:27.015591 containerd[1464]: time="2026-04-14T00:54:27.015436622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 00:54:27.015952 containerd[1464]: time="2026-04-14T00:54:27.015871276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 00:54:27.016124 containerd[1464]: time="2026-04-14T00:54:27.016100458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:54:27.117957 containerd[1464]: time="2026-04-14T00:54:27.105926043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:54:27.439821 containerd[1464]: time="2026-04-14T00:54:27.401987950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 00:54:27.439821 containerd[1464]: time="2026-04-14T00:54:27.403285950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 00:54:27.439821 containerd[1464]: time="2026-04-14T00:54:27.403307399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:54:27.439821 containerd[1464]: time="2026-04-14T00:54:27.403533193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:54:28.448104 systemd[1]: Started cri-containerd-412a2dc9467339a248e4e4f96c5c1917f918bfa03b0421e7c7e0a4238bc9ee3f.scope - libcontainer container 412a2dc9467339a248e4e4f96c5c1917f918bfa03b0421e7c7e0a4238bc9ee3f.
Apr 14 00:54:28.577535 systemd[1]: Started cri-containerd-fc697f1afa437893f5af809a240775f2291d8ff8bc8aa1f565b60edcb2ffbb68.scope - libcontainer container fc697f1afa437893f5af809a240775f2291d8ff8bc8aa1f565b60edcb2ffbb68.
Apr 14 00:54:28.581992 kubelet[2150]: E0414 00:54:28.579748 2150 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="6.4s"
Apr 14 00:54:28.581992 kubelet[2150]: E0414 00:54:28.581270 2150 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 00:54:28.591757 kubelet[2150]: I0414 00:54:28.591168 2150 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 14 00:54:28.591757 kubelet[2150]: E0414 00:54:28.591543 2150 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost"
Apr 14 00:54:28.728680 systemd[1]: run-containerd-runc-k8s.io-6f0840f10d63fec7fccc3bd430b0361f716ab700de2cd0bfce440c044f3a7d6b-runc.njWBDZ.mount: Deactivated successfully.
Apr 14 00:54:28.790000 systemd[1]: Started cri-containerd-6f0840f10d63fec7fccc3bd430b0361f716ab700de2cd0bfce440c044f3a7d6b.scope - libcontainer container 6f0840f10d63fec7fccc3bd430b0361f716ab700de2cd0bfce440c044f3a7d6b.
Apr 14 00:54:29.299847 containerd[1464]: time="2026-04-14T00:54:29.299623659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3566c1d7ed03bb3c60facf009a5678dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc697f1afa437893f5af809a240775f2291d8ff8bc8aa1f565b60edcb2ffbb68\""
Apr 14 00:54:29.304092 kubelet[2150]: E0414 00:54:29.303075 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:29.327822 containerd[1464]: time="2026-04-14T00:54:29.327775038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0efda43298855bbbb711a60eb1616ef3,Namespace:kube-system,Attempt:0,} returns sandbox id \"412a2dc9467339a248e4e4f96c5c1917f918bfa03b0421e7c7e0a4238bc9ee3f\""
Apr 14 00:54:29.328689 kubelet[2150]: E0414 00:54:29.328667 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:29.341618 containerd[1464]: time="2026-04-14T00:54:29.341556464Z" level=info msg="CreateContainer within sandbox \"fc697f1afa437893f5af809a240775f2291d8ff8bc8aa1f565b60edcb2ffbb68\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 14 00:54:29.376253 containerd[1464]: time="2026-04-14T00:54:29.375981202Z" level=info msg="CreateContainer within sandbox \"412a2dc9467339a248e4e4f96c5c1917f918bfa03b0421e7c7e0a4238bc9ee3f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 14 00:54:29.386835 containerd[1464]: time="2026-04-14T00:54:29.381471328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:bd70d524e6bc561f2082b467706799ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f0840f10d63fec7fccc3bd430b0361f716ab700de2cd0bfce440c044f3a7d6b\""
Apr 14 00:54:29.392781 kubelet[2150]: E0414 00:54:29.392408 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:29.528833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4183094696.mount: Deactivated successfully.
Apr 14 00:54:29.592152 containerd[1464]: time="2026-04-14T00:54:29.580761068Z" level=info msg="CreateContainer within sandbox \"6f0840f10d63fec7fccc3bd430b0361f716ab700de2cd0bfce440c044f3a7d6b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 14 00:54:29.803831 containerd[1464]: time="2026-04-14T00:54:29.785352129Z" level=info msg="CreateContainer within sandbox \"fc697f1afa437893f5af809a240775f2291d8ff8bc8aa1f565b60edcb2ffbb68\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2efbe5e4c7fca3468ea718aa772bcf9be63805bb33469f451269886f7d073b8f\""
Apr 14 00:54:29.838198 containerd[1464]: time="2026-04-14T00:54:29.838110834Z" level=info msg="StartContainer for \"2efbe5e4c7fca3468ea718aa772bcf9be63805bb33469f451269886f7d073b8f\""
Apr 14 00:54:29.881842 containerd[1464]: time="2026-04-14T00:54:29.864025685Z" level=info msg="CreateContainer within sandbox \"412a2dc9467339a248e4e4f96c5c1917f918bfa03b0421e7c7e0a4238bc9ee3f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"085607727a24053643b93d401e591f49d1381eb53c04b721aa11aca732a46a19\""
Apr 14 00:54:29.905318 containerd[1464]: time="2026-04-14T00:54:29.901937938Z" level=info msg="StartContainer for \"085607727a24053643b93d401e591f49d1381eb53c04b721aa11aca732a46a19\""
Apr 14 00:54:30.001788 containerd[1464]: time="2026-04-14T00:54:29.997777908Z" level=info msg="CreateContainer within sandbox \"6f0840f10d63fec7fccc3bd430b0361f716ab700de2cd0bfce440c044f3a7d6b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dac1a35f9bec7d38ca7ba2fb6fb0e885d02adb4ee77d763ca2cf4b042a82ec35\""
Apr 14 00:54:30.003334 containerd[1464]: time="2026-04-14T00:54:30.003116825Z" level=info msg="StartContainer for \"dac1a35f9bec7d38ca7ba2fb6fb0e885d02adb4ee77d763ca2cf4b042a82ec35\""
Apr 14 00:54:30.148557 systemd[1]: Started cri-containerd-2efbe5e4c7fca3468ea718aa772bcf9be63805bb33469f451269886f7d073b8f.scope - libcontainer container 2efbe5e4c7fca3468ea718aa772bcf9be63805bb33469f451269886f7d073b8f.
Apr 14 00:54:30.167525 systemd[1]: Started cri-containerd-085607727a24053643b93d401e591f49d1381eb53c04b721aa11aca732a46a19.scope - libcontainer container 085607727a24053643b93d401e591f49d1381eb53c04b721aa11aca732a46a19.
Apr 14 00:54:30.362171 systemd[1]: Started cri-containerd-dac1a35f9bec7d38ca7ba2fb6fb0e885d02adb4ee77d763ca2cf4b042a82ec35.scope - libcontainer container dac1a35f9bec7d38ca7ba2fb6fb0e885d02adb4ee77d763ca2cf4b042a82ec35.
Apr 14 00:54:30.610712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount631381169.mount: Deactivated successfully.
Apr 14 00:54:30.785606 containerd[1464]: time="2026-04-14T00:54:30.783478084Z" level=info msg="StartContainer for \"085607727a24053643b93d401e591f49d1381eb53c04b721aa11aca732a46a19\" returns successfully"
Apr 14 00:54:30.875706 containerd[1464]: time="2026-04-14T00:54:30.873216251Z" level=info msg="StartContainer for \"2efbe5e4c7fca3468ea718aa772bcf9be63805bb33469f451269886f7d073b8f\" returns successfully"
Apr 14 00:54:30.912117 containerd[1464]: time="2026-04-14T00:54:30.910725064Z" level=info msg="StartContainer for \"dac1a35f9bec7d38ca7ba2fb6fb0e885d02adb4ee77d763ca2cf4b042a82ec35\" returns successfully"
Apr 14 00:54:32.004139 kubelet[2150]: E0414 00:54:31.997280 2150 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:54:32.031516 kubelet[2150]: E0414 00:54:32.031412 2150 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:54:32.031711 kubelet[2150]: E0414 00:54:32.031601 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:32.077886 kubelet[2150]: E0414 00:54:32.077348 2150 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:54:32.102242 kubelet[2150]: E0414 00:54:32.099861 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:32.107448 kubelet[2150]: E0414 00:54:32.107407 2150 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:54:32.108246 kubelet[2150]: E0414 00:54:32.108204 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:33.549716 kubelet[2150]: E0414 00:54:33.549531 2150 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:54:33.558681 kubelet[2150]: E0414 00:54:33.558410 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:33.567553 kubelet[2150]: E0414 00:54:33.567461 2150 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:54:33.568428 kubelet[2150]: E0414 00:54:33.567952 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:33.593552 kubelet[2150]: E0414 00:54:33.582840 2150 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:54:33.622486 kubelet[2150]: E0414 00:54:33.617885 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:34.785473 kubelet[2150]: E0414 00:54:34.784601 2150 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:54:34.789757 kubelet[2150]: E0414 00:54:34.788702 2150 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:54:34.804669 kubelet[2150]: E0414 00:54:34.800356 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:34.804669 kubelet[2150]: E0414 00:54:34.803822 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:35.018855 kubelet[2150]: I0414 00:54:35.018018 2150 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 14 00:54:35.932621 kubelet[2150]: E0414 00:54:35.932312 2150 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:54:35.947419 kubelet[2150]: E0414 00:54:35.945863 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:36.109494 kubelet[2150]: E0414 00:54:36.108447 2150 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:54:36.121955 kubelet[2150]: E0414 00:54:36.115882 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:39.942437 kubelet[2150]: E0414 00:54:39.942367 2150 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:54:39.942874 kubelet[2150]: E0414 00:54:39.942582 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:42.007898 kubelet[2150]: E0414 00:54:42.007686 2150 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:54:44.538432 kubelet[2150]: E0414 00:54:44.535538 2150 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.73:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6130fc2bde22f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:54:21.456900655 +0000 UTC m=+1.400947160,LastTimestamp:2026-04-14 00:54:21.456900655 +0000 UTC m=+1.400947160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 00:54:45.040208 kubelet[2150]: E0414 00:54:45.034936 2150 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 14 00:54:45.096119 kubelet[2150]: E0414 00:54:45.090517 2150 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 14 00:54:46.271251 kubelet[2150]: E0414 00:54:46.269941 2150 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:54:46.284330 kubelet[2150]: E0414 00:54:46.283966 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:54:47.324587 kubelet[2150]: E0414 00:54:47.324498 2150 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 00:54:52.013827 kubelet[2150]: E0414 00:54:52.013296 2150 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:54:52.106302 kubelet[2150]: I0414 00:54:52.101443 2150 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 14 00:54:54.510245 kubelet[2150]: E0414 00:54:54.509662 2150 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:54:54.515995 kubelet[2150]: E0414 00:54:54.515816 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:55:02.026285 kubelet[2150]: E0414 00:55:02.025306 2150 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:55:02.103863 kubelet[2150]: E0414 00:55:02.102481 2150 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 14 00:55:02.120275 kubelet[2150]: E0414 00:55:02.117479 2150 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 14 00:55:04.577277 kubelet[2150]: E0414 00:55:04.576755 2150 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.73:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6130fc2bde22f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:54:21.456900655 +0000 UTC m=+1.400947160,LastTimestamp:2026-04-14 00:54:21.456900655 +0000 UTC m=+1.400947160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 00:55:09.150462 kubelet[2150]: I0414 00:55:09.149865 2150 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 14 00:55:12.033009 kubelet[2150]: E0414 00:55:12.031771 2150 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:55:14.999786 kubelet[2150]: E0414 00:55:14.999715 2150 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 00:55:15.003767 kubelet[2150]: E0414 00:55:15.003654 2150 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 00:55:19.134353 kubelet[2150]: E0414 00:55:19.126913 2150 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 14 00:55:19.193609 kubelet[2150]: E0414 00:55:19.189912 2150 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 14 00:55:22.040378 kubelet[2150]: E0414 00:55:22.038813 2150 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:55:24.626434 kubelet[2150]: E0414 00:55:24.626300 2150 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.73:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6130fc2bde22f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:54:21.456900655 +0000 UTC m=+1.400947160,LastTimestamp:2026-04-14 00:54:21.456900655 +0000 UTC m=+1.400947160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 00:55:26.290754 kubelet[2150]: I0414 00:55:26.280186 2150 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 14 00:55:32.058360 kubelet[2150]: E0414 00:55:32.058277 2150 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:55:36.152468 kubelet[2150]: E0414 00:55:36.150826 2150 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 14 00:55:36.302198 kubelet[2150]: E0414 00:55:36.300953 2150 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 14 00:55:42.105084 kubelet[2150]: E0414 00:55:42.104974 2150 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:55:43.418705 kubelet[2150]: I0414 00:55:43.410612 2150 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 14 00:55:44.646175 kubelet[2150]: E0414 00:55:44.642525 2150 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.73:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6130fc2bde22f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:54:21.456900655 +0000 UTC m=+1.400947160,LastTimestamp:2026-04-14 00:54:21.456900655 +0000 UTC m=+1.400947160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 00:55:52.106012 kubelet[2150]: E0414 00:55:52.105697 2150 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:55:53.216981 kubelet[2150]: E0414 00:55:53.209210 2150 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 14 00:55:53.503823 kubelet[2150]: E0414 00:55:53.449555 2150 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 14 00:55:55.730315 kubelet[2150]: E0414 00:55:55.729857 2150 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:55:55.730315 kubelet[2150]: E0414 00:55:55.730110 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:55:57.072643 kubelet[2150]: E0414 00:55:57.071809 2150 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 00:55:58.733507 kubelet[2150]: E0414 00:55:58.708026 2150 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:55:58.775397 kubelet[2150]: E0414 00:55:58.773804 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:56:00.592631 kubelet[2150]: I0414 00:56:00.592269 2150 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 14 00:56:02.118375 kubelet[2150]: E0414 00:56:02.112429 2150 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:56:04.668705 kubelet[2150]: E0414 00:56:04.667534 2150 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.73:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6130fc2bde22f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:54:21.456900655 +0000 UTC m=+1.400947160,LastTimestamp:2026-04-14 00:54:21.456900655 +0000 UTC m=+1.400947160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 00:56:10.223555 kubelet[2150]: E0414 00:56:10.223342 2150 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 14 00:56:10.603856 kubelet[2150]: E0414 00:56:10.603387 2150 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 14 00:56:11.712803 kubelet[2150]: E0414 00:56:11.711303 2150 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:56:11.712803 kubelet[2150]: E0414 00:56:11.711485 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:56:12.130275 kubelet[2150]: E0414 00:56:12.128771 2150 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:56:17.651157 kubelet[2150]: I0414 00:56:17.650697 2150 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 14 00:56:22.149559 kubelet[2150]: E0414 00:56:22.144707 2150 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:56:24.179639 kubelet[2150]: E0414 00:56:24.178983 2150 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 14 00:56:24.556519 kubelet[2150]: I0414 00:56:24.555577 2150 apiserver.go:52] "Watching apiserver"
Apr 14 00:56:24.736699 kubelet[2150]: E0414 00:56:24.674840 2150 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a6130fc2bde22f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:54:21.456900655 +0000 UTC m=+1.400947160,LastTimestamp:2026-04-14 00:54:21.456900655 +0000 UTC m=+1.400947160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 00:56:24.891398 kubelet[2150]: I0414 00:56:24.890967 2150 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 14 00:56:25.025680 kubelet[2150]: I0414 00:56:25.024585 2150 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Apr 14 00:56:25.099678 kubelet[2150]: I0414 00:56:25.097784 2150 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 14 00:56:25.412512 kubelet[2150]: E0414 00:56:25.410321 2150 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a6130fc7a0c16c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:54:21.538877804 +0000 UTC m=+1.482924312,LastTimestamp:2026-04-14 00:54:21.538877804 +0000 UTC m=+1.482924312,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 00:56:25.643159 kubelet[2150]: E0414 00:56:25.640652 2150 kubelet_node_status.go:386] "Node not becoming ready in time after startup"
Apr 14 00:56:26.339949 kubelet[2150]: I0414 00:56:26.339470 2150 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:56:26.399246 kubelet[2150]: E0414 00:56:26.361170 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:56:26.587129 kubelet[2150]: E0414 00:56:26.585391 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:56:26.598516 kubelet[2150]: I0414 00:56:26.598263 2150 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 14 00:56:26.790752 kubelet[2150]: E0414 00:56:26.789404 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:56:28.534584 kubelet[2150]: E0414 00:56:28.532868 2150 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:56:32.940023 kubelet[2150]: I0414 00:56:32.939924 2150 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.939905701 podStartE2EDuration="6.939905701s" podCreationTimestamp="2026-04-14 00:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:56:32.460213867 +0000 UTC m=+132.404260382" watchObservedRunningTime="2026-04-14 00:56:32.939905701 +0000 UTC m=+132.883952228"
Apr 14 00:56:33.532137 kubelet[2150]: I0414 00:56:33.528901 2150 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.528876166 podStartE2EDuration="7.528876166s" podCreationTimestamp="2026-04-14 00:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:56:33.034795261 +0000 UTC m=+132.978841777" watchObservedRunningTime="2026-04-14 00:56:33.528876166 +0000 UTC m=+133.472922692"
Apr 14 00:56:33.547320 kubelet[2150]: E0414 00:56:33.542464 2150 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:56:38.551192 kubelet[2150]: E0414 00:56:38.550474 2150 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:56:43.562263 kubelet[2150]: E0414 00:56:43.560387 2150 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:56:48.576178 kubelet[2150]: E0414 00:56:48.573584 2150 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:56:53.590950 kubelet[2150]: E0414 00:56:53.590535 2150 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:56:58.602820 kubelet[2150]: E0414 00:56:58.602718 2150 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:57:02.420004 systemd[1]: cri-containerd-dac1a35f9bec7d38ca7ba2fb6fb0e885d02adb4ee77d763ca2cf4b042a82ec35.scope: Deactivated successfully.
Apr 14 00:57:02.424353 systemd[1]: cri-containerd-dac1a35f9bec7d38ca7ba2fb6fb0e885d02adb4ee77d763ca2cf4b042a82ec35.scope: Consumed 9.572s CPU time.
Apr 14 00:57:02.701150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dac1a35f9bec7d38ca7ba2fb6fb0e885d02adb4ee77d763ca2cf4b042a82ec35-rootfs.mount: Deactivated successfully.
Apr 14 00:57:02.784998 containerd[1464]: time="2026-04-14T00:57:02.784906500Z" level=info msg="shim disconnected" id=dac1a35f9bec7d38ca7ba2fb6fb0e885d02adb4ee77d763ca2cf4b042a82ec35 namespace=k8s.io
Apr 14 00:57:02.791864 containerd[1464]: time="2026-04-14T00:57:02.785552355Z" level=warning msg="cleaning up after shim disconnected" id=dac1a35f9bec7d38ca7ba2fb6fb0e885d02adb4ee77d763ca2cf4b042a82ec35 namespace=k8s.io
Apr 14 00:57:02.791864 containerd[1464]: time="2026-04-14T00:57:02.785855067Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 00:57:03.286259 kubelet[2150]: I0414 00:57:03.286174 2150 scope.go:122] "RemoveContainer" containerID="dac1a35f9bec7d38ca7ba2fb6fb0e885d02adb4ee77d763ca2cf4b042a82ec35"
Apr 14 00:57:03.291352 kubelet[2150]: E0414 00:57:03.289147 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:57:03.331792 containerd[1464]: time="2026-04-14T00:57:03.331692508Z" level=info msg="CreateContainer within sandbox \"6f0840f10d63fec7fccc3bd430b0361f716ab700de2cd0bfce440c044f3a7d6b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 14 00:57:03.540872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2462636374.mount: Deactivated successfully.
Apr 14 00:57:03.610147 kubelet[2150]: E0414 00:57:03.609273 2150 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:57:03.631509 containerd[1464]: time="2026-04-14T00:57:03.630891061Z" level=info msg="CreateContainer within sandbox \"6f0840f10d63fec7fccc3bd430b0361f716ab700de2cd0bfce440c044f3a7d6b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9091af333326505730c6677abf88a65c3b70f671d297749545336812213de6cd\""
Apr 14 00:57:03.645425 containerd[1464]: time="2026-04-14T00:57:03.642899526Z" level=info msg="StartContainer for \"9091af333326505730c6677abf88a65c3b70f671d297749545336812213de6cd\""
Apr 14 00:57:03.959531 systemd[1]: Started cri-containerd-9091af333326505730c6677abf88a65c3b70f671d297749545336812213de6cd.scope - libcontainer container 9091af333326505730c6677abf88a65c3b70f671d297749545336812213de6cd.
Apr 14 00:57:04.047681 kubelet[2150]: I0414 00:57:04.047426 2150 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=38.047289492 podStartE2EDuration="38.047289492s" podCreationTimestamp="2026-04-14 00:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:56:33.536854238 +0000 UTC m=+133.480900746" watchObservedRunningTime="2026-04-14 00:57:04.047289492 +0000 UTC m=+163.991336015"
Apr 14 00:57:04.508761 containerd[1464]: time="2026-04-14T00:57:04.508663601Z" level=info msg="StartContainer for \"9091af333326505730c6677abf88a65c3b70f671d297749545336812213de6cd\" returns successfully"
Apr 14 00:57:05.531506 kubelet[2150]: E0414 00:57:05.530896 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:57:06.542231 kubelet[2150]: E0414 00:57:06.541550 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:57:08.639223 kubelet[2150]: E0414 00:57:08.638811 2150 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:57:08.849847 kubelet[2150]: E0414 00:57:08.846704 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:57:13.666099 kubelet[2150]: E0414 00:57:13.665876 2150 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:57:15.788888 systemd[1]: Reloading requested from client PID 2507 ('systemctl') (unit session-5.scope)...
Apr 14 00:57:15.788907 systemd[1]: Reloading...
Apr 14 00:57:16.107172 zram_generator::config[2546]: No configuration found.
Apr 14 00:57:16.231374 kubelet[2150]: E0414 00:57:16.229478 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:57:16.622331 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 00:57:17.007864 systemd[1]: Reloading finished in 1215 ms.
Apr 14 00:57:17.230545 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 00:57:17.232636 kubelet[2150]: I0414 00:57:17.232545 2150 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 14 00:57:17.293540 systemd[1]: kubelet.service: Deactivated successfully.
Apr 14 00:57:17.295593 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 00:57:17.295803 systemd[1]: kubelet.service: Consumed 31.939s CPU time, 140.4M memory peak, 0B memory swap peak.
Apr 14 00:57:17.342449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 00:57:18.021819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 00:57:18.043976 (kubelet)[2591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 14 00:57:18.444753 kubelet[2591]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 14 00:57:18.494901 kubelet[2591]: I0414 00:57:18.492955 2591 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 14 00:57:18.494901 kubelet[2591]: I0414 00:57:18.493645 2591 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 14 00:57:18.494901 kubelet[2591]: I0414 00:57:18.493782 2591 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 14 00:57:18.494901 kubelet[2591]: I0414 00:57:18.493791 2591 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 14 00:57:18.500517 kubelet[2591]: I0414 00:57:18.498793 2591 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 14 00:57:18.511872 kubelet[2591]: I0414 00:57:18.511791 2591 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 14 00:57:18.568643 kubelet[2591]: I0414 00:57:18.568396 2591 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 14 00:57:18.589500 kubelet[2591]: E0414 00:57:18.589299 2591 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 14 00:57:18.589723 kubelet[2591]: I0414 00:57:18.589483 2591 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 14 00:57:18.632584 kubelet[2591]: I0414 00:57:18.632427 2591 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 14 00:57:18.642704 kubelet[2591]: I0414 00:57:18.641647 2591 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 14 00:57:18.647161 kubelet[2591]: I0414 00:57:18.642977 2591 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 14 00:57:18.647701 kubelet[2591]: I0414 00:57:18.647476 2591 topology_manager.go:143] "Creating topology manager with none policy"
Apr 14 00:57:18.647988 kubelet[2591]: I0414 00:57:18.647891 2591 container_manager_linux.go:308] "Creating device plugin manager"
Apr 14 00:57:18.652394 kubelet[2591]: I0414 00:57:18.649821 2591 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 14 00:57:18.661022 kubelet[2591]: I0414 00:57:18.659134 2591 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 14 00:57:18.661022 kubelet[2591]: I0414 00:57:18.659722 2591 kubelet.go:482] "Attempting to sync node with API server"
Apr 14 00:57:18.661022 kubelet[2591]: I0414 00:57:18.659743 2591 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 14 00:57:18.661022 kubelet[2591]: I0414 00:57:18.659769 2591 kubelet.go:394] "Adding apiserver pod source"
Apr 14 00:57:18.661022 kubelet[2591]: I0414 00:57:18.659785 2591 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 14 00:57:18.674615 kubelet[2591]: I0414 00:57:18.674473 2591 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 14 00:57:18.711561 kubelet[2591]: I0414 00:57:18.710948 2591 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 14 00:57:18.717513 kubelet[2591]: I0414 00:57:18.714537 2591 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 14 00:57:18.842379 kubelet[2591]: I0414 00:57:18.841826 2591 server.go:1257] "Started kubelet"
Apr 14 00:57:18.844172 kubelet[2591]: I0414 00:57:18.842915 2591 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 14 00:57:18.851626 kubelet[2591]: I0414 00:57:18.851417 2591 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 14 00:57:18.852514 kubelet[2591]: I0414 00:57:18.852077 2591 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 14 00:57:18.852514 kubelet[2591]: I0414 00:57:18.852177 2591 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 14 00:57:18.874506 kubelet[2591]: I0414 00:57:18.874458 2591 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 14 00:57:18.889884 kubelet[2591]: I0414 00:57:18.889826 2591 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 14 00:57:18.928411 kubelet[2591]: I0414 00:57:18.927645 2591 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 14 00:57:18.928411 kubelet[2591]: E0414 00:57:18.927829 2591 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:57:18.937628 kubelet[2591]: I0414 00:57:18.937567 2591 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 14 00:57:18.938563 kubelet[2591]: I0414 00:57:18.937926 2591 factory.go:223] Registration of the systemd container factory successfully
Apr 14 00:57:18.940310 kubelet[2591]: I0414 00:57:18.938930 2591 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 14 00:57:18.962291 kubelet[2591]: I0414 00:57:18.943333 2591 reconciler.go:29] "Reconciler: start to sync state"
Apr 14 00:57:18.964228 kubelet[2591]: I0414 00:57:18.962772 2591 server.go:317] "Adding debug handlers to kubelet server"
Apr 14 00:57:19.005377 kubelet[2591]: I0414 00:57:19.003472 2591 factory.go:223] Registration of the containerd container factory successfully
Apr 14 00:57:19.005377 kubelet[2591]: E0414 00:57:19.004000 2591 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 14 00:57:19.061236 kubelet[2591]: I0414 00:57:19.059616 2591 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 14 00:57:19.063371 kubelet[2591]: I0414 00:57:19.063263 2591 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 14 00:57:19.063371 kubelet[2591]: I0414 00:57:19.063316 2591 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 14 00:57:19.063567 kubelet[2591]: I0414 00:57:19.063438 2591 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 14 00:57:19.063567 kubelet[2591]: E0414 00:57:19.063502 2591 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 14 00:57:19.181142 kubelet[2591]: E0414 00:57:19.173800 2591 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 00:57:19.379648 kubelet[2591]: E0414 00:57:19.374961 2591 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 00:57:19.713250 kubelet[2591]: I0414 00:57:19.672629 2591 apiserver.go:52] "Watching apiserver"
Apr 14 00:57:19.778269 kubelet[2591]: E0414 00:57:19.777790 2591 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 00:57:19.922217 kubelet[2591]: I0414 00:57:19.921834 2591 cpu_manager.go:225] "Starting" policy="none"
Apr 14 00:57:19.935344 kubelet[2591]: I0414 00:57:19.933843 2591 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 14 00:57:19.936324 kubelet[2591]: I0414 00:57:19.935774 2591 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 14 00:57:19.939587 kubelet[2591]: I0414 00:57:19.938661 2591 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Apr 14 00:57:19.939587 kubelet[2591]: I0414 00:57:19.938697 2591 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Apr 14 00:57:19.939587 kubelet[2591]: I0414 00:57:19.939228 2591 policy_none.go:50] "Start"
Apr 14 00:57:19.939587 kubelet[2591]: I0414 00:57:19.939355 2591 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 14 00:57:19.941633 kubelet[2591]: I0414 00:57:19.940809 2591 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 14 00:57:19.947115 kubelet[2591]: I0414 00:57:19.942328 2591 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 14 00:57:19.947115 kubelet[2591]: I0414 00:57:19.942350 2591 policy_none.go:44] "Start"
Apr 14 00:57:20.035777 kubelet[2591]: E0414 00:57:20.034861 2591 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 14 00:57:20.043408 kubelet[2591]: I0414 00:57:20.037474 2591 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 14 00:57:20.043408 kubelet[2591]: I0414 00:57:20.040563 2591 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 14 00:57:20.047479 kubelet[2591]: I0414 00:57:20.045746 2591 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 14 00:57:20.051112 kubelet[2591]: E0414 00:57:20.050919 2591 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 14 00:57:20.307256 kubelet[2591]: I0414 00:57:20.306677 2591 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 14 00:57:20.577359 kubelet[2591]: I0414 00:57:20.576550 2591 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Apr 14 00:57:20.579734 kubelet[2591]: I0414 00:57:20.579582 2591 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Apr 14 00:57:20.653368 kubelet[2591]: I0414 00:57:20.650752 2591 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 14 00:57:20.739315 kubelet[2591]: I0414 00:57:20.738914 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0efda43298855bbbb711a60eb1616ef3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0efda43298855bbbb711a60eb1616ef3\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 00:57:20.742563 kubelet[2591]: I0414 00:57:20.740468 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bd70d524e6bc561f2082b467706799ed-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"bd70d524e6bc561f2082b467706799ed\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:57:20.742563 kubelet[2591]: I0414 00:57:20.740558 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bd70d524e6bc561f2082b467706799ed-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"bd70d524e6bc561f2082b467706799ed\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:57:20.742563 kubelet[2591]: I0414 00:57:20.740583 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd70d524e6bc561f2082b467706799ed-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"bd70d524e6bc561f2082b467706799ed\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:57:20.742563 kubelet[2591]: I0414 00:57:20.740614 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3566c1d7ed03bb3c60facf009a5678dd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3566c1d7ed03bb3c60facf009a5678dd\") " pod="kube-system/kube-scheduler-localhost"
Apr 14 00:57:20.742563 kubelet[2591]: I0414 00:57:20.740632 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0efda43298855bbbb711a60eb1616ef3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0efda43298855bbbb711a60eb1616ef3\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 00:57:20.743197 kubelet[2591]: I0414 00:57:20.740657 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0efda43298855bbbb711a60eb1616ef3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0efda43298855bbbb711a60eb1616ef3\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 00:57:20.743197 kubelet[2591]: I0414 00:57:20.740675 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bd70d524e6bc561f2082b467706799ed-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"bd70d524e6bc561f2082b467706799ed\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:57:20.743197 kubelet[2591]: I0414 00:57:20.740700 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bd70d524e6bc561f2082b467706799ed-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"bd70d524e6bc561f2082b467706799ed\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:57:20.897674 kubelet[2591]: E0414 00:57:20.895549 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:57:20.904960 kubelet[2591]: E0414 00:57:20.904834 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:57:20.909244 kubelet[2591]: E0414 00:57:20.909142 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:57:21.385295 kubelet[2591]: E0414 00:57:21.385084 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:57:21.386550 kubelet[2591]: E0414 00:57:21.385875 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:57:31.233849 sudo[1596]: pam_unix(sudo:session): session closed for user root
Apr 14 00:57:31.239667 sshd[1593]: pam_unix(sshd:session): session closed for user core
Apr 14 00:57:31.294209 systemd[1]: sshd@4-10.0.0.73:22-10.0.0.1:59450.service: Deactivated successfully.
Apr 14 00:57:31.297457 systemd[1]: session-5.scope: Deactivated successfully.
Apr 14 00:57:31.298551 systemd[1]: session-5.scope: Consumed 9.457s CPU time, 164.6M memory peak, 0B memory swap peak.
Apr 14 00:57:31.305544 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit.
Apr 14 00:57:31.319491 systemd-logind[1449]: Removed session 5.
Apr 14 00:57:56.133150 kubelet[2591]: I0414 00:57:56.132195 2591 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 14 00:57:56.147809 containerd[1464]: time="2026-04-14T00:57:56.147588564Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 14 00:57:56.160965 kubelet[2591]: I0414 00:57:56.160828 2591 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 14 00:58:00.438557 kubelet[2591]: I0414 00:58:00.436382 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be4bcb56-71fd-4180-97fb-d8d478da2c65-xtables-lock\") pod \"kube-proxy-c77hm\" (UID: \"be4bcb56-71fd-4180-97fb-d8d478da2c65\") " pod="kube-system/kube-proxy-c77hm"
Apr 14 00:58:00.448146 kubelet[2591]: I0414 00:58:00.446087 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrw56\" (UniqueName: \"kubernetes.io/projected/be4bcb56-71fd-4180-97fb-d8d478da2c65-kube-api-access-lrw56\") pod \"kube-proxy-c77hm\" (UID: \"be4bcb56-71fd-4180-97fb-d8d478da2c65\") " pod="kube-system/kube-proxy-c77hm"
Apr 14 00:58:00.448146 kubelet[2591]: I0414 00:58:00.446150 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/be4bcb56-71fd-4180-97fb-d8d478da2c65-kube-proxy\") pod \"kube-proxy-c77hm\" (UID: \"be4bcb56-71fd-4180-97fb-d8d478da2c65\") " pod="kube-system/kube-proxy-c77hm"
Apr 14 00:58:00.448146 kubelet[2591]: I0414 00:58:00.446168 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be4bcb56-71fd-4180-97fb-d8d478da2c65-lib-modules\") pod \"kube-proxy-c77hm\" (UID: \"be4bcb56-71fd-4180-97fb-d8d478da2c65\") " pod="kube-system/kube-proxy-c77hm"
Apr 14 00:58:00.498784 systemd[1]: Created slice kubepods-besteffort-podbe4bcb56_71fd_4180_97fb_d8d478da2c65.slice - libcontainer container kubepods-besteffort-podbe4bcb56_71fd_4180_97fb_d8d478da2c65.slice.
Apr 14 00:58:00.553637 kubelet[2591]: I0414 00:58:00.553220 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/9bdc528d-c6c5-4e1e-9ebe-7498b449e50a-cni-plugin\") pod \"kube-flannel-ds-pmh5w\" (UID: \"9bdc528d-c6c5-4e1e-9ebe-7498b449e50a\") " pod="kube-flannel/kube-flannel-ds-pmh5w"
Apr 14 00:58:00.553637 kubelet[2591]: I0414 00:58:00.553289 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/9bdc528d-c6c5-4e1e-9ebe-7498b449e50a-cni\") pod \"kube-flannel-ds-pmh5w\" (UID: \"9bdc528d-c6c5-4e1e-9ebe-7498b449e50a\") " pod="kube-flannel/kube-flannel-ds-pmh5w"
Apr 14 00:58:00.553637 kubelet[2591]: I0414 00:58:00.553311 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdg4t\" (UniqueName: \"kubernetes.io/projected/9bdc528d-c6c5-4e1e-9ebe-7498b449e50a-kube-api-access-gdg4t\") pod \"kube-flannel-ds-pmh5w\" (UID: \"9bdc528d-c6c5-4e1e-9ebe-7498b449e50a\") " pod="kube-flannel/kube-flannel-ds-pmh5w"
Apr 14 00:58:00.553637 kubelet[2591]: I0414 00:58:00.553340 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9bdc528d-c6c5-4e1e-9ebe-7498b449e50a-run\") pod \"kube-flannel-ds-pmh5w\" (UID: \"9bdc528d-c6c5-4e1e-9ebe-7498b449e50a\") " pod="kube-flannel/kube-flannel-ds-pmh5w"
Apr 14 00:58:00.553637 kubelet[2591]: I0414 00:58:00.553360 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/9bdc528d-c6c5-4e1e-9ebe-7498b449e50a-flannel-cfg\") pod \"kube-flannel-ds-pmh5w\" (UID: \"9bdc528d-c6c5-4e1e-9ebe-7498b449e50a\") " pod="kube-flannel/kube-flannel-ds-pmh5w"
Apr 14 00:58:00.553920 kubelet[2591]: I0414 00:58:00.553405 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bdc528d-c6c5-4e1e-9ebe-7498b449e50a-xtables-lock\") pod \"kube-flannel-ds-pmh5w\" (UID: \"9bdc528d-c6c5-4e1e-9ebe-7498b449e50a\") " pod="kube-flannel/kube-flannel-ds-pmh5w"
Apr 14 00:58:00.564669 systemd[1]: Created slice kubepods-burstable-pod9bdc528d_c6c5_4e1e_9ebe_7498b449e50a.slice - libcontainer container kubepods-burstable-pod9bdc528d_c6c5_4e1e_9ebe_7498b449e50a.slice.
Apr 14 00:58:02.397166 kubelet[2591]: E0414 00:58:02.393618 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:58:02.435395 kubelet[2591]: E0414 00:58:02.435297 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:58:02.501595 containerd[1464]: time="2026-04-14T00:58:02.497994423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-pmh5w,Uid:9bdc528d-c6c5-4e1e-9ebe-7498b449e50a,Namespace:kube-flannel,Attempt:0,}"
Apr 14 00:58:02.501595 containerd[1464]: time="2026-04-14T00:58:02.498300571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c77hm,Uid:be4bcb56-71fd-4180-97fb-d8d478da2c65,Namespace:kube-system,Attempt:0,}"
Apr 14 00:58:03.393977 containerd[1464]: time="2026-04-14T00:58:03.387842802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 00:58:03.393977 containerd[1464]: time="2026-04-14T00:58:03.388144124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 00:58:03.393977 containerd[1464]: time="2026-04-14T00:58:03.388170219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:58:03.400210 containerd[1464]: time="2026-04-14T00:58:03.398369505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:58:03.580495 containerd[1464]: time="2026-04-14T00:58:03.576961282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 00:58:03.580495 containerd[1464]: time="2026-04-14T00:58:03.577326010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 00:58:03.580495 containerd[1464]: time="2026-04-14T00:58:03.577369210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:58:03.580495 containerd[1464]: time="2026-04-14T00:58:03.577641334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:58:03.650693 systemd[1]: Started cri-containerd-56b136db26ab2969aae686bf9062c8f0f5a42bcb6ab82113e3a7f296030db913.scope - libcontainer container 56b136db26ab2969aae686bf9062c8f0f5a42bcb6ab82113e3a7f296030db913.
Apr 14 00:58:03.898350 systemd[1]: Started cri-containerd-c7c8fbc96e6fb8a7a7ca086e3a86950a64768f09a683c3d18e5e9b19e63d00fb.scope - libcontainer container c7c8fbc96e6fb8a7a7ca086e3a86950a64768f09a683c3d18e5e9b19e63d00fb.
Apr 14 00:58:04.452631 containerd[1464]: time="2026-04-14T00:58:04.452147079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-pmh5w,Uid:9bdc528d-c6c5-4e1e-9ebe-7498b449e50a,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"56b136db26ab2969aae686bf9062c8f0f5a42bcb6ab82113e3a7f296030db913\""
Apr 14 00:58:04.594146 kubelet[2591]: E0414 00:58:04.583634 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:58:04.720110 containerd[1464]: time="2026-04-14T00:58:04.719885980Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\""
Apr 14 00:58:05.292884 containerd[1464]: time="2026-04-14T00:58:05.292572343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c77hm,Uid:be4bcb56-71fd-4180-97fb-d8d478da2c65,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7c8fbc96e6fb8a7a7ca086e3a86950a64768f09a683c3d18e5e9b19e63d00fb\""
Apr 14 00:58:05.294932 kubelet[2591]: E0414 00:58:05.293871 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:58:05.399117 containerd[1464]: time="2026-04-14T00:58:05.398855478Z" level=info msg="CreateContainer within sandbox \"c7c8fbc96e6fb8a7a7ca086e3a86950a64768f09a683c3d18e5e9b19e63d00fb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 14 00:58:05.903336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount197013181.mount: Deactivated successfully.
Apr 14 00:58:06.205865 containerd[1464]: time="2026-04-14T00:58:06.205561059Z" level=info msg="CreateContainer within sandbox \"c7c8fbc96e6fb8a7a7ca086e3a86950a64768f09a683c3d18e5e9b19e63d00fb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"db9dae55a521b06242d78f20ca7c2e142e2e9d10546bc0a66407bf1453426623\""
Apr 14 00:58:06.237867 containerd[1464]: time="2026-04-14T00:58:06.237796387Z" level=info msg="StartContainer for \"db9dae55a521b06242d78f20ca7c2e142e2e9d10546bc0a66407bf1453426623\""
Apr 14 00:58:06.846434 systemd[1]: Started cri-containerd-db9dae55a521b06242d78f20ca7c2e142e2e9d10546bc0a66407bf1453426623.scope - libcontainer container db9dae55a521b06242d78f20ca7c2e142e2e9d10546bc0a66407bf1453426623.
Apr 14 00:58:07.496184 containerd[1464]: time="2026-04-14T00:58:07.493637483Z" level=info msg="StartContainer for \"db9dae55a521b06242d78f20ca7c2e142e2e9d10546bc0a66407bf1453426623\" returns successfully"
Apr 14 00:58:08.460219 kubelet[2591]: E0414 00:58:08.460153 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:58:09.640984 kubelet[2591]: E0414 00:58:09.638903 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:58:11.549958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount373070552.mount: Deactivated successfully.
Apr 14 00:58:24.082182 kubelet[2591]: E0414 00:58:24.081867 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:58:26.120936 kubelet[2591]: E0414 00:58:26.118628 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:58:29.040062 systemd[1]: Started sshd@5-10.0.0.73:22-10.0.0.1:46072.service - OpenSSH per-connection server daemon (10.0.0.1:46072). Apr 14 00:58:29.049262 containerd[1464]: time="2026-04-14T00:58:29.039836953Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:58:29.099937 containerd[1464]: time="2026-04-14T00:58:29.098446323Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Apr 14 00:58:29.128982 containerd[1464]: time="2026-04-14T00:58:29.128572532Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:58:29.198191 sshd[2931]: Accepted publickey for core from 10.0.0.1 port 46072 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:58:29.218819 sshd[2931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:58:29.305133 containerd[1464]: time="2026-04-14T00:58:29.295759506Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:58:29.353463 systemd-logind[1449]: New session 6 of user core. 
Apr 14 00:58:29.361192 containerd[1464]: time="2026-04-14T00:58:29.357969528Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 24.634000626s" Apr 14 00:58:29.361192 containerd[1464]: time="2026-04-14T00:58:29.358561352Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Apr 14 00:58:29.371895 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 14 00:58:29.547345 containerd[1464]: time="2026-04-14T00:58:29.541880925Z" level=info msg="CreateContainer within sandbox \"56b136db26ab2969aae686bf9062c8f0f5a42bcb6ab82113e3a7f296030db913\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Apr 14 00:58:29.947627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount849300781.mount: Deactivated successfully. Apr 14 00:58:30.179921 containerd[1464]: time="2026-04-14T00:58:30.169917745Z" level=info msg="CreateContainer within sandbox \"56b136db26ab2969aae686bf9062c8f0f5a42bcb6ab82113e3a7f296030db913\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"65aa57986e4129f2878605e45423a0e4f174e682253939f9697d8d8feb9eb54c\"" Apr 14 00:58:30.219586 containerd[1464]: time="2026-04-14T00:58:30.217654461Z" level=info msg="StartContainer for \"65aa57986e4129f2878605e45423a0e4f174e682253939f9697d8d8feb9eb54c\"" Apr 14 00:58:30.692892 systemd[1]: Started cri-containerd-65aa57986e4129f2878605e45423a0e4f174e682253939f9697d8d8feb9eb54c.scope - libcontainer container 65aa57986e4129f2878605e45423a0e4f174e682253939f9697d8d8feb9eb54c. 
Apr 14 00:58:31.210444 systemd[1]: cri-containerd-65aa57986e4129f2878605e45423a0e4f174e682253939f9697d8d8feb9eb54c.scope: Deactivated successfully. Apr 14 00:58:31.292627 containerd[1464]: time="2026-04-14T00:58:31.290865522Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bdc528d_c6c5_4e1e_9ebe_7498b449e50a.slice/cri-containerd-65aa57986e4129f2878605e45423a0e4f174e682253939f9697d8d8feb9eb54c.scope/memory.events\": no such file or directory" Apr 14 00:58:31.346808 containerd[1464]: time="2026-04-14T00:58:31.346702357Z" level=info msg="StartContainer for \"65aa57986e4129f2878605e45423a0e4f174e682253939f9697d8d8feb9eb54c\" returns successfully" Apr 14 00:58:32.170828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65aa57986e4129f2878605e45423a0e4f174e682253939f9697d8d8feb9eb54c-rootfs.mount: Deactivated successfully. Apr 14 00:58:32.298472 sshd[2931]: pam_unix(sshd:session): session closed for user core Apr 14 00:58:32.321260 systemd[1]: sshd@5-10.0.0.73:22-10.0.0.1:46072.service: Deactivated successfully. Apr 14 00:58:32.338916 systemd[1]: session-6.scope: Deactivated successfully. Apr 14 00:58:32.361879 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Apr 14 00:58:32.383907 systemd-logind[1449]: Removed session 6. 
Apr 14 00:58:32.446705 kubelet[2591]: E0414 00:58:32.427455 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:58:33.210867 containerd[1464]: time="2026-04-14T00:58:33.209577702Z" level=info msg="shim disconnected" id=65aa57986e4129f2878605e45423a0e4f174e682253939f9697d8d8feb9eb54c namespace=k8s.io Apr 14 00:58:33.225957 containerd[1464]: time="2026-04-14T00:58:33.217880928Z" level=warning msg="cleaning up after shim disconnected" id=65aa57986e4129f2878605e45423a0e4f174e682253939f9697d8d8feb9eb54c namespace=k8s.io Apr 14 00:58:33.225957 containerd[1464]: time="2026-04-14T00:58:33.225912340Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:58:33.245627 kubelet[2591]: I0414 00:58:33.242450 2591 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-c77hm" podStartSLOduration=34.242433821 podStartE2EDuration="34.242433821s" podCreationTimestamp="2026-04-14 00:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:58:09.654632942 +0000 UTC m=+51.591941376" watchObservedRunningTime="2026-04-14 00:58:33.242433821 +0000 UTC m=+75.179742259" Apr 14 00:58:33.515180 kubelet[2591]: E0414 00:58:33.505839 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:58:34.511865 kubelet[2591]: E0414 00:58:34.511734 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:58:34.592111 containerd[1464]: time="2026-04-14T00:58:34.591688158Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Apr 14 00:58:37.367666 systemd[1]: 
Started sshd@6-10.0.0.73:22-10.0.0.1:35800.service - OpenSSH per-connection server daemon (10.0.0.1:35800). Apr 14 00:58:37.610466 sshd[3006]: Accepted publickey for core from 10.0.0.1 port 35800 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:58:37.631458 sshd[3006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:58:37.730885 systemd-logind[1449]: New session 7 of user core. Apr 14 00:58:37.741351 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 14 00:58:41.117524 sshd[3006]: pam_unix(sshd:session): session closed for user core Apr 14 00:58:41.135337 systemd[1]: sshd@6-10.0.0.73:22-10.0.0.1:35800.service: Deactivated successfully. Apr 14 00:58:41.171519 systemd[1]: session-7.scope: Deactivated successfully. Apr 14 00:58:41.175211 systemd[1]: session-7.scope: Consumed 1.475s CPU time. Apr 14 00:58:41.188906 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Apr 14 00:58:41.202359 systemd-logind[1449]: Removed session 7. Apr 14 00:58:45.141528 kubelet[2591]: E0414 00:58:45.141413 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:58:46.182659 systemd[1]: Started sshd@7-10.0.0.73:22-10.0.0.1:58956.service - OpenSSH per-connection server daemon (10.0.0.1:58956). Apr 14 00:58:46.425988 sshd[3032]: Accepted publickey for core from 10.0.0.1 port 58956 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:58:46.440551 sshd[3032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:58:46.559345 systemd-logind[1449]: New session 8 of user core. Apr 14 00:58:46.577694 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 14 00:58:50.253854 sshd[3032]: pam_unix(sshd:session): session closed for user core Apr 14 00:58:50.411892 systemd[1]: sshd@7-10.0.0.73:22-10.0.0.1:58956.service: Deactivated successfully. Apr 14 00:58:50.556186 systemd[1]: session-8.scope: Deactivated successfully. Apr 14 00:58:50.561798 systemd[1]: session-8.scope: Consumed 1.054s CPU time. Apr 14 00:58:50.615953 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Apr 14 00:58:50.652964 systemd-logind[1449]: Removed session 8. Apr 14 00:58:55.286226 systemd[1]: Started sshd@8-10.0.0.73:22-10.0.0.1:46304.service - OpenSSH per-connection server daemon (10.0.0.1:46304). Apr 14 00:58:55.907097 sshd[3059]: Accepted publickey for core from 10.0.0.1 port 46304 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:58:55.905810 sshd[3059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:58:55.975952 systemd-logind[1449]: New session 9 of user core. Apr 14 00:58:55.992997 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 14 00:58:57.936943 sshd[3059]: pam_unix(sshd:session): session closed for user core Apr 14 00:58:58.025173 systemd[1]: sshd@8-10.0.0.73:22-10.0.0.1:46304.service: Deactivated successfully. Apr 14 00:58:58.066800 systemd[1]: session-9.scope: Deactivated successfully. Apr 14 00:58:58.094426 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Apr 14 00:58:58.115379 systemd-logind[1449]: Removed session 9. 
Apr 14 00:58:59.947273 containerd[1464]: time="2026-04-14T00:58:59.947218551Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:59:00.009337 containerd[1464]: time="2026-04-14T00:59:00.007454173Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Apr 14 00:59:00.080337 containerd[1464]: time="2026-04-14T00:59:00.080216181Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:59:00.286072 containerd[1464]: time="2026-04-14T00:59:00.275586597Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:59:00.338336 containerd[1464]: time="2026-04-14T00:59:00.333902658Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 25.741897396s" Apr 14 00:59:00.338336 containerd[1464]: time="2026-04-14T00:59:00.333994378Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Apr 14 00:59:00.546825 containerd[1464]: time="2026-04-14T00:59:00.546129178Z" level=info msg="CreateContainer within sandbox \"56b136db26ab2969aae686bf9062c8f0f5a42bcb6ab82113e3a7f296030db913\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 14 00:59:01.921820 containerd[1464]: time="2026-04-14T00:59:01.921687028Z" level=info msg="CreateContainer 
within sandbox \"56b136db26ab2969aae686bf9062c8f0f5a42bcb6ab82113e3a7f296030db913\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2083d96b4dc98534f73c241a87b689c54aedaf2a4e4ea93160e43ce74dde2250\"" Apr 14 00:59:01.952114 containerd[1464]: time="2026-04-14T00:59:01.950851210Z" level=info msg="StartContainer for \"2083d96b4dc98534f73c241a87b689c54aedaf2a4e4ea93160e43ce74dde2250\"" Apr 14 00:59:02.179667 systemd[1]: Started cri-containerd-2083d96b4dc98534f73c241a87b689c54aedaf2a4e4ea93160e43ce74dde2250.scope - libcontainer container 2083d96b4dc98534f73c241a87b689c54aedaf2a4e4ea93160e43ce74dde2250. Apr 14 00:59:02.614132 systemd[1]: cri-containerd-2083d96b4dc98534f73c241a87b689c54aedaf2a4e4ea93160e43ce74dde2250.scope: Deactivated successfully. Apr 14 00:59:02.649081 containerd[1464]: time="2026-04-14T00:59:02.648636669Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bdc528d_c6c5_4e1e_9ebe_7498b449e50a.slice/cri-containerd-2083d96b4dc98534f73c241a87b689c54aedaf2a4e4ea93160e43ce74dde2250.scope/memory.events\": no such file or directory" Apr 14 00:59:02.705114 kubelet[2591]: I0414 00:59:02.704932 2591 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 14 00:59:02.831303 containerd[1464]: time="2026-04-14T00:59:02.822923573Z" level=info msg="StartContainer for \"2083d96b4dc98534f73c241a87b689c54aedaf2a4e4ea93160e43ce74dde2250\" returns successfully" Apr 14 00:59:03.031790 systemd[1]: Started sshd@9-10.0.0.73:22-10.0.0.1:46314.service - OpenSSH per-connection server daemon (10.0.0.1:46314). 
Apr 14 00:59:03.290936 sshd[3130]: Accepted publickey for core from 10.0.0.1 port 46314 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:59:03.292138 sshd[3130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:59:03.369647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2083d96b4dc98534f73c241a87b689c54aedaf2a4e4ea93160e43ce74dde2250-rootfs.mount: Deactivated successfully. Apr 14 00:59:03.401521 systemd-logind[1449]: New session 10 of user core. Apr 14 00:59:03.414286 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 14 00:59:04.043823 containerd[1464]: time="2026-04-14T00:59:04.040489294Z" level=info msg="shim disconnected" id=2083d96b4dc98534f73c241a87b689c54aedaf2a4e4ea93160e43ce74dde2250 namespace=k8s.io Apr 14 00:59:04.053729 containerd[1464]: time="2026-04-14T00:59:04.053511018Z" level=warning msg="cleaning up after shim disconnected" id=2083d96b4dc98534f73c241a87b689c54aedaf2a4e4ea93160e43ce74dde2250 namespace=k8s.io Apr 14 00:59:04.056024 containerd[1464]: time="2026-04-14T00:59:04.055630255Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:59:04.224348 kubelet[2591]: E0414 00:59:04.222307 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:59:04.350561 containerd[1464]: time="2026-04-14T00:59:04.350206911Z" level=warning msg="cleanup warnings time=\"2026-04-14T00:59:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 14 00:59:05.561281 kubelet[2591]: E0414 00:59:05.561136 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:59:05.779313 containerd[1464]: 
time="2026-04-14T00:59:05.778643427Z" level=info msg="CreateContainer within sandbox \"56b136db26ab2969aae686bf9062c8f0f5a42bcb6ab82113e3a7f296030db913\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Apr 14 00:59:06.215777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3134685806.mount: Deactivated successfully. Apr 14 00:59:06.291528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3159892155.mount: Deactivated successfully. Apr 14 00:59:06.536744 containerd[1464]: time="2026-04-14T00:59:06.534246531Z" level=info msg="CreateContainer within sandbox \"56b136db26ab2969aae686bf9062c8f0f5a42bcb6ab82113e3a7f296030db913\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"2bc88f4cfe33ad90c3627ddbe5cf1fad65603923ba06455d6ef0638e16102ca6\"" Apr 14 00:59:06.549513 containerd[1464]: time="2026-04-14T00:59:06.549455034Z" level=info msg="StartContainer for \"2bc88f4cfe33ad90c3627ddbe5cf1fad65603923ba06455d6ef0638e16102ca6\"" Apr 14 00:59:06.608929 systemd[1]: Created slice kubepods-burstable-podf6790ece_ae8a_4c0e_b807_434722585b34.slice - libcontainer container kubepods-burstable-podf6790ece_ae8a_4c0e_b807_434722585b34.slice. 
Apr 14 00:59:06.621704 kubelet[2591]: I0414 00:59:06.619357 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jnzx\" (UniqueName: \"kubernetes.io/projected/6529c0dd-27f2-4cb0-872c-892c7262ad8e-kube-api-access-5jnzx\") pod \"coredns-7d764666f9-kgl2t\" (UID: \"6529c0dd-27f2-4cb0-872c-892c7262ad8e\") " pod="kube-system/coredns-7d764666f9-kgl2t" Apr 14 00:59:06.629300 kubelet[2591]: I0414 00:59:06.628123 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfgf4\" (UniqueName: \"kubernetes.io/projected/f6790ece-ae8a-4c0e-b807-434722585b34-kube-api-access-rfgf4\") pod \"coredns-7d764666f9-ffgf4\" (UID: \"f6790ece-ae8a-4c0e-b807-434722585b34\") " pod="kube-system/coredns-7d764666f9-ffgf4" Apr 14 00:59:06.629300 kubelet[2591]: I0414 00:59:06.628209 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6790ece-ae8a-4c0e-b807-434722585b34-config-volume\") pod \"coredns-7d764666f9-ffgf4\" (UID: \"f6790ece-ae8a-4c0e-b807-434722585b34\") " pod="kube-system/coredns-7d764666f9-ffgf4" Apr 14 00:59:06.639085 kubelet[2591]: I0414 00:59:06.636584 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6529c0dd-27f2-4cb0-872c-892c7262ad8e-config-volume\") pod \"coredns-7d764666f9-kgl2t\" (UID: \"6529c0dd-27f2-4cb0-872c-892c7262ad8e\") " pod="kube-system/coredns-7d764666f9-kgl2t" Apr 14 00:59:06.657843 sshd[3130]: pam_unix(sshd:session): session closed for user core Apr 14 00:59:06.663397 systemd[1]: sshd@9-10.0.0.73:22-10.0.0.1:46314.service: Deactivated successfully. Apr 14 00:59:06.671897 systemd[1]: session-10.scope: Deactivated successfully. Apr 14 00:59:06.674997 systemd[1]: session-10.scope: Consumed 1.181s CPU time. 
Apr 14 00:59:06.679754 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Apr 14 00:59:06.703292 systemd-logind[1449]: Removed session 10. Apr 14 00:59:06.718220 systemd[1]: Started cri-containerd-2bc88f4cfe33ad90c3627ddbe5cf1fad65603923ba06455d6ef0638e16102ca6.scope - libcontainer container 2bc88f4cfe33ad90c3627ddbe5cf1fad65603923ba06455d6ef0638e16102ca6. Apr 14 00:59:06.732542 systemd[1]: Created slice kubepods-burstable-pod6529c0dd_27f2_4cb0_872c_892c7262ad8e.slice - libcontainer container kubepods-burstable-pod6529c0dd_27f2_4cb0_872c_892c7262ad8e.slice. Apr 14 00:59:06.927235 containerd[1464]: time="2026-04-14T00:59:06.927169975Z" level=info msg="StartContainer for \"2bc88f4cfe33ad90c3627ddbe5cf1fad65603923ba06455d6ef0638e16102ca6\" returns successfully" Apr 14 00:59:07.181463 kubelet[2591]: E0414 00:59:07.177451 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:59:07.192756 containerd[1464]: time="2026-04-14T00:59:07.192179680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-kgl2t,Uid:6529c0dd-27f2-4cb0-872c-892c7262ad8e,Namespace:kube-system,Attempt:0,}" Apr 14 00:59:07.421383 kubelet[2591]: E0414 00:59:07.420493 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:59:07.435761 containerd[1464]: time="2026-04-14T00:59:07.434506830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ffgf4,Uid:f6790ece-ae8a-4c0e-b807-434722585b34,Namespace:kube-system,Attempt:0,}" Apr 14 00:59:07.803108 kubelet[2591]: E0414 00:59:07.802836 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 
00:59:08.521552 systemd[1]: run-netns-cni\x2da67e241e\x2d2634\x2d0582\x2dbaad\x2d2141e60180a0.mount: Deactivated successfully. Apr 14 00:59:08.528478 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad03c63c60a042906e2c6fde9acb71582098e4c2159246605b61c2f06da2a5d5-shm.mount: Deactivated successfully. Apr 14 00:59:09.233580 containerd[1464]: time="2026-04-14T00:59:09.231522422Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-kgl2t,Uid:6529c0dd-27f2-4cb0-872c-892c7262ad8e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ad03c63c60a042906e2c6fde9acb71582098e4c2159246605b61c2f06da2a5d5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 14 00:59:09.239769 kubelet[2591]: E0414 00:59:09.237486 2591 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad03c63c60a042906e2c6fde9acb71582098e4c2159246605b61c2f06da2a5d5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 14 00:59:09.250125 kubelet[2591]: I0414 00:59:09.237503 2591 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-pmh5w" podStartSLOduration=9.328109773 podStartE2EDuration="1m10.237486185s" podCreationTimestamp="2026-04-14 00:57:59 +0000 UTC" firstStartedPulling="2026-04-14 00:58:04.686722357 +0000 UTC m=+46.624030817" lastFinishedPulling="2026-04-14 00:59:05.596098787 +0000 UTC m=+107.533407229" observedRunningTime="2026-04-14 00:59:09.204260323 +0000 UTC m=+111.141568770" watchObservedRunningTime="2026-04-14 00:59:09.237486185 +0000 UTC m=+111.174794629" Apr 14 00:59:09.252611 kubelet[2591]: E0414 00:59:09.250999 2591 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"ad03c63c60a042906e2c6fde9acb71582098e4c2159246605b61c2f06da2a5d5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-kgl2t" Apr 14 00:59:09.252611 kubelet[2591]: E0414 00:59:09.251460 2591 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad03c63c60a042906e2c6fde9acb71582098e4c2159246605b61c2f06da2a5d5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-kgl2t" Apr 14 00:59:09.252611 kubelet[2591]: E0414 00:59:09.251559 2591 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-kgl2t_kube-system(6529c0dd-27f2-4cb0-872c-892c7262ad8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-kgl2t_kube-system(6529c0dd-27f2-4cb0-872c-892c7262ad8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad03c63c60a042906e2c6fde9acb71582098e4c2159246605b61c2f06da2a5d5\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7d764666f9-kgl2t" podUID="6529c0dd-27f2-4cb0-872c-892c7262ad8e" Apr 14 00:59:09.855565 systemd[1]: run-netns-cni\x2d8a4bb299\x2d1a4e\x2dc8fd\x2d0e04\x2d3daa6fb9b6f6.mount: Deactivated successfully. Apr 14 00:59:09.870010 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5a0cf89e66eba579cb5abc88befc326e18f27d1ab5fcc1f9b426bc931c73bf71-shm.mount: Deactivated successfully. 
Apr 14 00:59:10.030844 containerd[1464]: time="2026-04-14T00:59:10.030727788Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ffgf4,Uid:f6790ece-ae8a-4c0e-b807-434722585b34,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5a0cf89e66eba579cb5abc88befc326e18f27d1ab5fcc1f9b426bc931c73bf71\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 14 00:59:10.034768 kubelet[2591]: E0414 00:59:10.032750 2591 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a0cf89e66eba579cb5abc88befc326e18f27d1ab5fcc1f9b426bc931c73bf71\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 14 00:59:10.034768 kubelet[2591]: E0414 00:59:10.032863 2591 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a0cf89e66eba579cb5abc88befc326e18f27d1ab5fcc1f9b426bc931c73bf71\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-ffgf4" Apr 14 00:59:10.034768 kubelet[2591]: E0414 00:59:10.032890 2591 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a0cf89e66eba579cb5abc88befc326e18f27d1ab5fcc1f9b426bc931c73bf71\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-ffgf4" Apr 14 00:59:10.034768 kubelet[2591]: E0414 00:59:10.032964 2591 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-ffgf4_kube-system(f6790ece-ae8a-4c0e-b807-434722585b34)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-ffgf4_kube-system(f6790ece-ae8a-4c0e-b807-434722585b34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a0cf89e66eba579cb5abc88befc326e18f27d1ab5fcc1f9b426bc931c73bf71\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7d764666f9-ffgf4" podUID="f6790ece-ae8a-4c0e-b807-434722585b34" Apr 14 00:59:10.290913 systemd-networkd[1388]: flannel.1: Link UP Apr 14 00:59:10.290925 systemd-networkd[1388]: flannel.1: Gained carrier Apr 14 00:59:11.712581 systemd[1]: Started sshd@10-10.0.0.73:22-10.0.0.1:47504.service - OpenSSH per-connection server daemon (10.0.0.1:47504). Apr 14 00:59:11.737365 systemd-networkd[1388]: flannel.1: Gained IPv6LL Apr 14 00:59:12.081392 sshd[3329]: Accepted publickey for core from 10.0.0.1 port 47504 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:59:12.096620 sshd[3329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:59:12.305718 systemd-logind[1449]: New session 11 of user core. Apr 14 00:59:12.346831 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 14 00:59:15.091366 sshd[3329]: pam_unix(sshd:session): session closed for user core Apr 14 00:59:15.118574 systemd[1]: sshd@10-10.0.0.73:22-10.0.0.1:47504.service: Deactivated successfully. Apr 14 00:59:15.138443 systemd[1]: session-11.scope: Deactivated successfully. Apr 14 00:59:15.145249 systemd[1]: session-11.scope: Consumed 1.106s CPU time. Apr 14 00:59:15.171722 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Apr 14 00:59:15.186868 systemd-logind[1449]: Removed session 11. Apr 14 00:59:20.148077 systemd[1]: Started sshd@11-10.0.0.73:22-10.0.0.1:51076.service - OpenSSH per-connection server daemon (10.0.0.1:51076). 
Apr 14 00:59:20.239371 sshd[3372]: Accepted publickey for core from 10.0.0.1 port 51076 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:59:20.251477 sshd[3372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:59:20.286798 systemd-logind[1449]: New session 12 of user core. Apr 14 00:59:20.308844 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 14 00:59:21.167195 kubelet[2591]: E0414 00:59:21.167140 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:59:21.172165 containerd[1464]: time="2026-04-14T00:59:21.171406310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-kgl2t,Uid:6529c0dd-27f2-4cb0-872c-892c7262ad8e,Namespace:kube-system,Attempt:0,}" Apr 14 00:59:21.295946 kubelet[2591]: E0414 00:59:21.294720 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:59:21.301757 containerd[1464]: time="2026-04-14T00:59:21.298893307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ffgf4,Uid:f6790ece-ae8a-4c0e-b807-434722585b34,Namespace:kube-system,Attempt:0,}" Apr 14 00:59:21.892761 sshd[3372]: pam_unix(sshd:session): session closed for user core Apr 14 00:59:21.903017 systemd[1]: sshd@11-10.0.0.73:22-10.0.0.1:51076.service: Deactivated successfully. Apr 14 00:59:21.935479 systemd[1]: session-12.scope: Deactivated successfully. Apr 14 00:59:21.957512 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Apr 14 00:59:22.021200 systemd-logind[1449]: Removed session 12. 
Apr 14 00:59:22.167687 systemd-networkd[1388]: cni0: Link UP
Apr 14 00:59:22.167701 systemd-networkd[1388]: cni0: Gained carrier
Apr 14 00:59:22.189382 systemd-networkd[1388]: cni0: Lost carrier
Apr 14 00:59:22.223330 systemd-networkd[1388]: vethd9d3112b: Link UP
Apr 14 00:59:22.279640 kernel: cni0: port 1(vethd9d3112b) entered blocking state
Apr 14 00:59:22.280013 kernel: cni0: port 1(vethd9d3112b) entered disabled state
Apr 14 00:59:22.280633 kernel: vethd9d3112b: entered allmulticast mode
Apr 14 00:59:22.294141 kernel: vethd9d3112b: entered promiscuous mode
Apr 14 00:59:22.309644 kernel: cni0: port 1(vethd9d3112b) entered blocking state
Apr 14 00:59:22.309899 kernel: cni0: port 1(vethd9d3112b) entered forwarding state
Apr 14 00:59:22.318179 kernel: cni0: port 1(vethd9d3112b) entered disabled state
Apr 14 00:59:22.413094 kernel: cni0: port 1(vethd9d3112b) entered blocking state
Apr 14 00:59:22.417543 kernel: cni0: port 1(vethd9d3112b) entered forwarding state
Apr 14 00:59:22.418388 systemd-networkd[1388]: vethd9d3112b: Gained carrier
Apr 14 00:59:22.418603 systemd-networkd[1388]: cni0: Gained carrier
Apr 14 00:59:22.435968 kernel: cni0: port 2(veth5d148857) entered blocking state
Apr 14 00:59:22.440956 kernel: cni0: port 2(veth5d148857) entered disabled state
Apr 14 00:59:22.444366 systemd-networkd[1388]: veth5d148857: Link UP
Apr 14 00:59:22.445076 kernel: veth5d148857: entered allmulticast mode
Apr 14 00:59:22.449987 kernel: veth5d148857: entered promiscuous mode
Apr 14 00:59:22.454979 kernel: cni0: port 2(veth5d148857) entered blocking state
Apr 14 00:59:22.455237 kernel: cni0: port 2(veth5d148857) entered forwarding state
Apr 14 00:59:22.455704 containerd[1464]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000096260), "name":"cbr0", "type":"bridge"}
Apr 14 00:59:22.455704 containerd[1464]: delegateAdd: netconf sent to delegate plugin:
Apr 14 00:59:22.493846 systemd-networkd[1388]: veth5d148857: Gained carrier
Apr 14 00:59:22.512467 containerd[1464]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Apr 14 00:59:22.512467 containerd[1464]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a950), "name":"cbr0", "type":"bridge"}
Apr 14 00:59:22.512467 containerd[1464]: delegateAdd: netconf sent to delegate plugin:
Apr 14 00:59:22.992445 containerd[1464]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Apr 14 00:59:22.992445 containerd[1464]: time="2026-04-14T00:59:22.984921469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 00:59:22.992445 containerd[1464]: time="2026-04-14T00:59:22.990914855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 00:59:22.992445 containerd[1464]: time="2026-04-14T00:59:22.990968136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:59:23.027132 containerd[1464]: time="2026-04-14T00:59:23.026751449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:59:23.187307 systemd[1]: Started cri-containerd-9501b43ebca8c853a54e260c4547279aa6b45136a2bc72831bf24d7c5fba5627.scope - libcontainer container 9501b43ebca8c853a54e260c4547279aa6b45136a2bc72831bf24d7c5fba5627.
Apr 14 00:59:23.375989 containerd[1464]: time="2026-04-14T00:59:23.350944582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 00:59:23.375989 containerd[1464]: time="2026-04-14T00:59:23.355815891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 00:59:23.375989 containerd[1464]: time="2026-04-14T00:59:23.355854121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:59:23.449706 containerd[1464]: time="2026-04-14T00:59:23.443488102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:59:23.554717 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 14 00:59:23.764850 systemd[1]: Started cri-containerd-c7183fe0465168e25b2f5a2c1cd1875cfba8abc9ce50c01d00a2a80138c86c4e.scope - libcontainer container c7183fe0465168e25b2f5a2c1cd1875cfba8abc9ce50c01d00a2a80138c86c4e.
Apr 14 00:59:23.833655 systemd-networkd[1388]: cni0: Gained IPv6LL
Apr 14 00:59:24.023357 systemd-networkd[1388]: vethd9d3112b: Gained IPv6LL
Apr 14 00:59:24.143809 containerd[1464]: time="2026-04-14T00:59:24.124670536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ffgf4,Uid:f6790ece-ae8a-4c0e-b807-434722585b34,Namespace:kube-system,Attempt:0,} returns sandbox id \"9501b43ebca8c853a54e260c4547279aa6b45136a2bc72831bf24d7c5fba5627\""
Apr 14 00:59:24.156432 kubelet[2591]: E0414 00:59:24.156317 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:59:24.166529 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 14 00:59:24.344329 systemd-networkd[1388]: veth5d148857: Gained IPv6LL
Apr 14 00:59:24.566186 containerd[1464]: time="2026-04-14T00:59:24.565948351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-kgl2t,Uid:6529c0dd-27f2-4cb0-872c-892c7262ad8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7183fe0465168e25b2f5a2c1cd1875cfba8abc9ce50c01d00a2a80138c86c4e\""
Apr 14 00:59:24.617749 containerd[1464]: time="2026-04-14T00:59:24.617261167Z" level=info msg="CreateContainer within sandbox \"9501b43ebca8c853a54e260c4547279aa6b45136a2bc72831bf24d7c5fba5627\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 14 00:59:24.627094 kubelet[2591]: E0414 00:59:24.626713 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:59:24.720763 containerd[1464]: time="2026-04-14T00:59:24.720304006Z" level=info msg="CreateContainer within sandbox \"c7183fe0465168e25b2f5a2c1cd1875cfba8abc9ce50c01d00a2a80138c86c4e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 14 00:59:25.067553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3690554032.mount: Deactivated successfully.
Apr 14 00:59:25.089502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2427170803.mount: Deactivated successfully.
Apr 14 00:59:25.697559 containerd[1464]: time="2026-04-14T00:59:25.696952825Z" level=info msg="CreateContainer within sandbox \"9501b43ebca8c853a54e260c4547279aa6b45136a2bc72831bf24d7c5fba5627\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"076c052fd69199cd1388f9a6df2ed90da366641aa586bd71505ab551446b8776\""
Apr 14 00:59:25.721643 containerd[1464]: time="2026-04-14T00:59:25.720812183Z" level=info msg="StartContainer for \"076c052fd69199cd1388f9a6df2ed90da366641aa586bd71505ab551446b8776\""
Apr 14 00:59:25.816733 containerd[1464]: time="2026-04-14T00:59:25.811530165Z" level=info msg="CreateContainer within sandbox \"c7183fe0465168e25b2f5a2c1cd1875cfba8abc9ce50c01d00a2a80138c86c4e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"425cf28ae52622bafde6565649dce314ef48ae29e696de33a12ae5d2b98c6aff\""
Apr 14 00:59:25.912953 containerd[1464]: time="2026-04-14T00:59:25.909624230Z" level=info msg="StartContainer for \"425cf28ae52622bafde6565649dce314ef48ae29e696de33a12ae5d2b98c6aff\""
Apr 14 00:59:26.248693 systemd[1]: Started cri-containerd-076c052fd69199cd1388f9a6df2ed90da366641aa586bd71505ab551446b8776.scope - libcontainer container 076c052fd69199cd1388f9a6df2ed90da366641aa586bd71505ab551446b8776.
Apr 14 00:59:26.469017 systemd[1]: Started cri-containerd-425cf28ae52622bafde6565649dce314ef48ae29e696de33a12ae5d2b98c6aff.scope - libcontainer container 425cf28ae52622bafde6565649dce314ef48ae29e696de33a12ae5d2b98c6aff.
Apr 14 00:59:27.014470 systemd[1]: Started sshd@12-10.0.0.73:22-10.0.0.1:55712.service - OpenSSH per-connection server daemon (10.0.0.1:55712).
Apr 14 00:59:27.331108 containerd[1464]: time="2026-04-14T00:59:27.318442079Z" level=info msg="StartContainer for \"076c052fd69199cd1388f9a6df2ed90da366641aa586bd71505ab551446b8776\" returns successfully"
Apr 14 00:59:27.331108 containerd[1464]: time="2026-04-14T00:59:27.318906016Z" level=info msg="StartContainer for \"425cf28ae52622bafde6565649dce314ef48ae29e696de33a12ae5d2b98c6aff\" returns successfully"
Apr 14 00:59:27.467078 sshd[3643]: Accepted publickey for core from 10.0.0.1 port 55712 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:59:27.469935 sshd[3643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:59:27.511871 systemd-logind[1449]: New session 13 of user core.
Apr 14 00:59:27.522671 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 14 00:59:27.571084 kubelet[2591]: E0414 00:59:27.569891 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:59:28.734210 kubelet[2591]: E0414 00:59:28.729631 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:59:28.740427 kubelet[2591]: E0414 00:59:28.733880 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:59:29.195112 kubelet[2591]: E0414 00:59:29.194744 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:59:29.887104 kubelet[2591]: E0414 00:59:29.886146 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:59:29.887104 kubelet[2591]: E0414 00:59:29.886418 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:59:29.951309 sshd[3643]: pam_unix(sshd:session): session closed for user core
Apr 14 00:59:30.077809 systemd[1]: Started sshd@13-10.0.0.73:22-10.0.0.1:55720.service - OpenSSH per-connection server daemon (10.0.0.1:55720).
Apr 14 00:59:30.083884 systemd[1]: sshd@12-10.0.0.73:22-10.0.0.1:55712.service: Deactivated successfully.
Apr 14 00:59:30.107859 systemd[1]: session-13.scope: Deactivated successfully.
Apr 14 00:59:30.125182 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit.
Apr 14 00:59:30.143189 systemd-logind[1449]: Removed session 13.
Apr 14 00:59:30.957424 kubelet[2591]: I0414 00:59:30.955420 2591 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-kgl2t" podStartSLOduration=91.955342879 podStartE2EDuration="1m31.955342879s" podCreationTimestamp="2026-04-14 00:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:59:30.025280437 +0000 UTC m=+131.962588879" watchObservedRunningTime="2026-04-14 00:59:30.955342879 +0000 UTC m=+132.892651344"
Apr 14 00:59:31.007985 kubelet[2591]: E0414 00:59:31.007809 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:59:31.008362 kubelet[2591]: E0414 00:59:31.008293 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:59:31.592876 sshd[3694]: Accepted publickey for core from 10.0.0.1 port 55720 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:59:31.601905 sshd[3694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:59:31.666612 systemd-logind[1449]: New session 14 of user core.
Apr 14 00:59:31.685482 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 14 00:59:32.373747 kubelet[2591]: I0414 00:59:32.373296 2591 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-ffgf4" podStartSLOduration=93.372467425 podStartE2EDuration="1m33.372467425s" podCreationTimestamp="2026-04-14 00:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:59:31.096593746 +0000 UTC m=+133.033902194" watchObservedRunningTime="2026-04-14 00:59:32.372467425 +0000 UTC m=+134.309775918"
Apr 14 00:59:33.070644 kubelet[2591]: E0414 00:59:33.070001 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:59:35.063367 sshd[3694]: pam_unix(sshd:session): session closed for user core
Apr 14 00:59:35.113812 systemd[1]: sshd@13-10.0.0.73:22-10.0.0.1:55720.service: Deactivated successfully.
Apr 14 00:59:35.117598 systemd[1]: session-14.scope: Deactivated successfully.
Apr 14 00:59:35.121411 systemd[1]: session-14.scope: Consumed 1.032s CPU time.
Apr 14 00:59:35.138864 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit.
Apr 14 00:59:35.197285 systemd[1]: Started sshd@14-10.0.0.73:22-10.0.0.1:55732.service - OpenSSH per-connection server daemon (10.0.0.1:55732).
Apr 14 00:59:35.207803 systemd-logind[1449]: Removed session 14.
Apr 14 00:59:35.249630 kubelet[2591]: E0414 00:59:35.248593 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:59:35.466351 sshd[3737]: Accepted publickey for core from 10.0.0.1 port 55732 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:59:35.467448 sshd[3737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:59:35.550881 systemd-logind[1449]: New session 15 of user core.
Apr 14 00:59:35.596727 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 14 00:59:37.592946 sshd[3737]: pam_unix(sshd:session): session closed for user core
Apr 14 00:59:37.652477 systemd[1]: sshd@14-10.0.0.73:22-10.0.0.1:55732.service: Deactivated successfully.
Apr 14 00:59:37.669554 systemd[1]: session-15.scope: Deactivated successfully.
Apr 14 00:59:37.705101 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit.
Apr 14 00:59:37.713719 systemd-logind[1449]: Removed session 15.
Apr 14 00:59:42.720599 systemd[1]: Started sshd@15-10.0.0.73:22-10.0.0.1:57112.service - OpenSSH per-connection server daemon (10.0.0.1:57112).
Apr 14 00:59:43.008659 sshd[3780]: Accepted publickey for core from 10.0.0.1 port 57112 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:59:43.052824 sshd[3780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:59:43.103690 systemd-logind[1449]: New session 16 of user core.
Apr 14 00:59:43.115746 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 14 00:59:44.013076 sshd[3780]: pam_unix(sshd:session): session closed for user core
Apr 14 00:59:44.039488 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit.
Apr 14 00:59:44.045788 systemd[1]: sshd@15-10.0.0.73:22-10.0.0.1:57112.service: Deactivated successfully.
Apr 14 00:59:44.061572 systemd[1]: session-16.scope: Deactivated successfully.
Apr 14 00:59:44.084755 systemd-logind[1449]: Removed session 16.
Apr 14 00:59:49.138647 systemd[1]: Started sshd@16-10.0.0.73:22-10.0.0.1:41266.service - OpenSSH per-connection server daemon (10.0.0.1:41266).
Apr 14 00:59:49.376322 sshd[3830]: Accepted publickey for core from 10.0.0.1 port 41266 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:59:49.390847 sshd[3830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:59:49.527790 systemd-logind[1449]: New session 17 of user core.
Apr 14 00:59:49.563959 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 14 00:59:51.654507 sshd[3830]: pam_unix(sshd:session): session closed for user core
Apr 14 00:59:51.683360 systemd[1]: sshd@16-10.0.0.73:22-10.0.0.1:41266.service: Deactivated successfully.
Apr 14 00:59:51.724399 systemd[1]: session-17.scope: Deactivated successfully.
Apr 14 00:59:51.756302 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit.
Apr 14 00:59:51.769101 systemd-logind[1449]: Removed session 17.
Apr 14 00:59:56.736981 systemd[1]: Started sshd@17-10.0.0.73:22-10.0.0.1:41664.service - OpenSSH per-connection server daemon (10.0.0.1:41664).
Apr 14 00:59:56.861805 sshd[3864]: Accepted publickey for core from 10.0.0.1 port 41664 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:59:56.867481 sshd[3864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:59:57.025681 systemd-logind[1449]: New session 18 of user core.
Apr 14 00:59:57.066013 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 14 00:59:58.146321 sshd[3864]: pam_unix(sshd:session): session closed for user core
Apr 14 00:59:58.239665 systemd[1]: sshd@17-10.0.0.73:22-10.0.0.1:41664.service: Deactivated successfully.
Apr 14 00:59:58.260330 systemd[1]: session-18.scope: Deactivated successfully.
Apr 14 00:59:58.270689 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit.
Apr 14 00:59:58.273592 systemd-logind[1449]: Removed session 18.
Apr 14 01:00:03.348789 systemd[1]: Started sshd@18-10.0.0.73:22-10.0.0.1:41670.service - OpenSSH per-connection server daemon (10.0.0.1:41670).
Apr 14 01:00:03.573729 sshd[3904]: Accepted publickey for core from 10.0.0.1 port 41670 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 01:00:03.624456 sshd[3904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:00:03.734537 systemd-logind[1449]: New session 19 of user core.
Apr 14 01:00:03.759446 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 14 01:00:05.301270 sshd[3904]: pam_unix(sshd:session): session closed for user core
Apr 14 01:00:05.348723 systemd[1]: sshd@18-10.0.0.73:22-10.0.0.1:41670.service: Deactivated successfully.
Apr 14 01:00:05.429501 systemd[1]: session-19.scope: Deactivated successfully.
Apr 14 01:00:05.447505 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit.
Apr 14 01:00:05.467174 systemd-logind[1449]: Removed session 19.
Apr 14 01:00:09.074801 kubelet[2591]: E0414 01:00:09.074226 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:00:10.372348 systemd[1]: Started sshd@19-10.0.0.73:22-10.0.0.1:35078.service - OpenSSH per-connection server daemon (10.0.0.1:35078).
Apr 14 01:00:10.809542 sshd[3953]: Accepted publickey for core from 10.0.0.1 port 35078 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 01:00:10.827703 sshd[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:00:10.919867 systemd-logind[1449]: New session 20 of user core.
Apr 14 01:00:10.923645 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 14 01:00:11.128995 kubelet[2591]: E0414 01:00:11.125692 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:00:12.207151 sshd[3953]: pam_unix(sshd:session): session closed for user core
Apr 14 01:00:12.268882 systemd[1]: sshd@19-10.0.0.73:22-10.0.0.1:35078.service: Deactivated successfully.
Apr 14 01:00:12.282232 systemd[1]: session-20.scope: Deactivated successfully.
Apr 14 01:00:12.294951 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit.
Apr 14 01:00:12.303567 systemd-logind[1449]: Removed session 20.
Apr 14 01:00:17.293162 systemd[1]: Started sshd@20-10.0.0.73:22-10.0.0.1:56994.service - OpenSSH per-connection server daemon (10.0.0.1:56994).
Apr 14 01:00:17.739161 sshd[3990]: Accepted publickey for core from 10.0.0.1 port 56994 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 01:00:17.752665 sshd[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:00:17.819272 systemd-logind[1449]: New session 21 of user core.
Apr 14 01:00:17.843938 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 14 01:00:19.344944 sshd[3990]: pam_unix(sshd:session): session closed for user core
Apr 14 01:00:19.585395 systemd[1]: sshd@20-10.0.0.73:22-10.0.0.1:56994.service: Deactivated successfully.
Apr 14 01:00:19.618660 systemd[1]: session-21.scope: Deactivated successfully.
Apr 14 01:00:19.626523 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit.
Apr 14 01:00:19.650119 systemd[1]: Started sshd@21-10.0.0.73:22-10.0.0.1:57000.service - OpenSSH per-connection server daemon (10.0.0.1:57000).
Apr 14 01:00:19.660177 systemd-logind[1449]: Removed session 21.
Apr 14 01:00:19.885499 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 57000 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 01:00:19.891804 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:00:20.029221 systemd-logind[1449]: New session 22 of user core.
Apr 14 01:00:20.076590 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 14 01:00:22.793853 sshd[4013]: pam_unix(sshd:session): session closed for user core
Apr 14 01:00:22.829437 systemd[1]: sshd@21-10.0.0.73:22-10.0.0.1:57000.service: Deactivated successfully.
Apr 14 01:00:22.839977 systemd[1]: session-22.scope: Deactivated successfully.
Apr 14 01:00:22.840508 systemd[1]: session-22.scope: Consumed 1.015s CPU time.
Apr 14 01:00:22.865173 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit.
Apr 14 01:00:22.878360 systemd-logind[1449]: Removed session 22.
Apr 14 01:00:22.943525 systemd[1]: Started sshd@22-10.0.0.73:22-10.0.0.1:57008.service - OpenSSH per-connection server daemon (10.0.0.1:57008).
Apr 14 01:00:23.255553 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 57008 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 01:00:23.270485 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:00:23.393712 systemd-logind[1449]: New session 23 of user core.
Apr 14 01:00:23.414411 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 14 01:00:31.095112 kubelet[2591]: E0414 01:00:31.095015 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:00:36.141781 sshd[4045]: pam_unix(sshd:session): session closed for user core
Apr 14 01:00:36.201962 systemd[1]: sshd@22-10.0.0.73:22-10.0.0.1:57008.service: Deactivated successfully.
Apr 14 01:00:36.213979 systemd[1]: session-23.scope: Deactivated successfully.
Apr 14 01:00:36.216225 systemd[1]: session-23.scope: Consumed 2.931s CPU time.
Apr 14 01:00:36.226677 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit.
Apr 14 01:00:36.327719 systemd[1]: Started sshd@23-10.0.0.73:22-10.0.0.1:35530.service - OpenSSH per-connection server daemon (10.0.0.1:35530).
Apr 14 01:00:36.345745 systemd-logind[1449]: Removed session 23.
Apr 14 01:00:36.640586 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 35530 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 01:00:36.648345 sshd[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:00:36.686750 systemd-logind[1449]: New session 24 of user core.
Apr 14 01:00:36.707672 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 14 01:00:39.097217 kubelet[2591]: E0414 01:00:39.094318 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:00:41.400764 sshd[4106]: pam_unix(sshd:session): session closed for user core
Apr 14 01:00:41.505518 systemd[1]: sshd@23-10.0.0.73:22-10.0.0.1:35530.service: Deactivated successfully.
Apr 14 01:00:41.539367 systemd[1]: session-24.scope: Deactivated successfully.
Apr 14 01:00:41.540781 systemd[1]: session-24.scope: Consumed 2.298s CPU time.
Apr 14 01:00:41.568318 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit.
Apr 14 01:00:41.597410 systemd[1]: Started sshd@24-10.0.0.73:22-10.0.0.1:35542.service - OpenSSH per-connection server daemon (10.0.0.1:35542).
Apr 14 01:00:41.625887 systemd-logind[1449]: Removed session 24.
Apr 14 01:00:41.826282 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 35542 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 01:00:41.849020 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:00:41.986622 systemd-logind[1449]: New session 25 of user core.
Apr 14 01:00:42.001732 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 14 01:00:43.456184 sshd[4141]: pam_unix(sshd:session): session closed for user core
Apr 14 01:00:43.499021 systemd[1]: sshd@24-10.0.0.73:22-10.0.0.1:35542.service: Deactivated successfully.
Apr 14 01:00:43.540387 systemd[1]: session-25.scope: Deactivated successfully.
Apr 14 01:00:43.567848 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit.
Apr 14 01:00:43.579097 systemd-logind[1449]: Removed session 25.
Apr 14 01:00:47.072086 kubelet[2591]: E0414 01:00:47.071594 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:00:48.655366 systemd[1]: Started sshd@25-10.0.0.73:22-10.0.0.1:56910.service - OpenSSH per-connection server daemon (10.0.0.1:56910).
Apr 14 01:00:49.125755 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 56910 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 01:00:49.150989 sshd[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:00:49.217351 systemd-logind[1449]: New session 26 of user core.
Apr 14 01:00:49.243331 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 14 01:00:51.295597 sshd[4199]: pam_unix(sshd:session): session closed for user core
Apr 14 01:00:51.326202 systemd[1]: sshd@25-10.0.0.73:22-10.0.0.1:56910.service: Deactivated successfully.
Apr 14 01:00:51.371333 systemd[1]: session-26.scope: Deactivated successfully.
Apr 14 01:00:51.375780 systemd[1]: session-26.scope: Consumed 1.089s CPU time.
Apr 14 01:00:51.414385 systemd-logind[1449]: Session 26 logged out. Waiting for processes to exit.
Apr 14 01:00:51.437959 systemd-logind[1449]: Removed session 26.
Apr 14 01:00:53.080360 kubelet[2591]: E0414 01:00:53.078509 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:00:56.404819 systemd[1]: Started sshd@26-10.0.0.73:22-10.0.0.1:37074.service - OpenSSH per-connection server daemon (10.0.0.1:37074).
Apr 14 01:00:57.049115 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 37074 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 01:00:57.053012 sshd[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:00:57.123982 systemd-logind[1449]: New session 27 of user core.
Apr 14 01:00:57.147345 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 14 01:00:58.091609 kubelet[2591]: E0414 01:00:58.089512 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:00:58.600552 sshd[4234]: pam_unix(sshd:session): session closed for user core
Apr 14 01:00:58.628301 systemd[1]: sshd@26-10.0.0.73:22-10.0.0.1:37074.service: Deactivated successfully.
Apr 14 01:00:58.670856 systemd[1]: session-27.scope: Deactivated successfully.
Apr 14 01:00:58.676580 systemd-logind[1449]: Session 27 logged out. Waiting for processes to exit.
Apr 14 01:00:58.677894 systemd-logind[1449]: Removed session 27.
Apr 14 01:01:03.729836 systemd[1]: Started sshd@27-10.0.0.73:22-10.0.0.1:37080.service - OpenSSH per-connection server daemon (10.0.0.1:37080).
Apr 14 01:01:04.080790 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 37080 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 01:01:04.096400 sshd[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:01:04.183589 systemd-logind[1449]: New session 28 of user core.
Apr 14 01:01:04.249804 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 14 01:01:05.954997 sshd[4278]: pam_unix(sshd:session): session closed for user core
Apr 14 01:01:06.021498 systemd[1]: sshd@27-10.0.0.73:22-10.0.0.1:37080.service: Deactivated successfully.
Apr 14 01:01:06.041651 systemd[1]: session-28.scope: Deactivated successfully.
Apr 14 01:01:06.051736 systemd-logind[1449]: Session 28 logged out. Waiting for processes to exit.
Apr 14 01:01:06.078757 systemd-logind[1449]: Removed session 28.
Apr 14 01:01:11.097346 systemd[1]: Started sshd@28-10.0.0.73:22-10.0.0.1:56050.service - OpenSSH per-connection server daemon (10.0.0.1:56050).
Apr 14 01:01:11.314258 sshd[4323]: Accepted publickey for core from 10.0.0.1 port 56050 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 01:01:11.316767 sshd[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:01:11.499133 systemd-logind[1449]: New session 29 of user core.
Apr 14 01:01:11.563155 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 14 01:01:13.091316 kubelet[2591]: E0414 01:01:13.090131 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:01:13.809877 sshd[4323]: pam_unix(sshd:session): session closed for user core
Apr 14 01:01:13.824159 systemd[1]: sshd@28-10.0.0.73:22-10.0.0.1:56050.service: Deactivated successfully.
Apr 14 01:01:13.856810 systemd[1]: session-29.scope: Deactivated successfully.
Apr 14 01:01:13.864464 systemd-logind[1449]: Session 29 logged out. Waiting for processes to exit.
Apr 14 01:01:13.867390 systemd-logind[1449]: Removed session 29.
Apr 14 01:01:18.927611 systemd[1]: Started sshd@29-10.0.0.73:22-10.0.0.1:41580.service - OpenSSH per-connection server daemon (10.0.0.1:41580).
Apr 14 01:01:19.285001 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 41580 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 01:01:19.286688 sshd[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:01:19.370137 systemd-logind[1449]: New session 30 of user core.
Apr 14 01:01:19.493904 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 14 01:01:21.709454 sshd[4359]: pam_unix(sshd:session): session closed for user core
Apr 14 01:01:21.761226 systemd[1]: sshd@29-10.0.0.73:22-10.0.0.1:41580.service: Deactivated successfully.
Apr 14 01:01:21.792837 systemd[1]: session-30.scope: Deactivated successfully.
Apr 14 01:01:21.797751 systemd[1]: session-30.scope: Consumed 1.369s CPU time.
Apr 14 01:01:21.849304 systemd-logind[1449]: Session 30 logged out. Waiting for processes to exit.
Apr 14 01:01:21.939309 systemd-logind[1449]: Removed session 30.
Apr 14 01:01:26.803096 systemd[1]: Started sshd@30-10.0.0.73:22-10.0.0.1:41890.service - OpenSSH per-connection server daemon (10.0.0.1:41890).
Apr 14 01:01:27.207198 sshd[4416]: Accepted publickey for core from 10.0.0.1 port 41890 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 01:01:27.252552 sshd[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:01:27.308337 systemd-logind[1449]: New session 31 of user core.
Apr 14 01:01:27.337533 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 14 01:01:29.008114 sshd[4416]: pam_unix(sshd:session): session closed for user core
Apr 14 01:01:29.040544 systemd[1]: sshd@30-10.0.0.73:22-10.0.0.1:41890.service: Deactivated successfully.
Apr 14 01:01:29.094385 systemd[1]: session-31.scope: Deactivated successfully.
Apr 14 01:01:29.096230 systemd[1]: session-31.scope: Consumed 1.001s CPU time.
Apr 14 01:01:29.109946 systemd-logind[1449]: Session 31 logged out. Waiting for processes to exit.
Apr 14 01:01:29.121362 systemd-logind[1449]: Removed session 31.
Apr 14 01:01:30.076157 kubelet[2591]: E0414 01:01:30.075529 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:01:34.214921 systemd[1]: Started sshd@31-10.0.0.73:22-10.0.0.1:41904.service - OpenSSH per-connection server daemon (10.0.0.1:41904).
Apr 14 01:01:34.555241 sshd[4450]: Accepted publickey for core from 10.0.0.1 port 41904 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 01:01:34.570535 sshd[4450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:01:34.653710 systemd-logind[1449]: New session 32 of user core.
Apr 14 01:01:34.670877 systemd[1]: Started session-32.scope - Session 32 of User core.
Apr 14 01:01:36.610637 sshd[4450]: pam_unix(sshd:session): session closed for user core
Apr 14 01:01:36.752313 systemd[1]: sshd@31-10.0.0.73:22-10.0.0.1:41904.service: Deactivated successfully.
Apr 14 01:01:36.814391 systemd[1]: session-32.scope: Deactivated successfully.
Apr 14 01:01:36.869873 systemd-logind[1449]: Session 32 logged out. Waiting for processes to exit.
Apr 14 01:01:36.891175 systemd-logind[1449]: Removed session 32.
Apr 14 01:01:38.095677 kubelet[2591]: E0414 01:01:38.093998 2591 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:01:41.740259 systemd[1]: Started sshd@32-10.0.0.73:22-10.0.0.1:60466.service - OpenSSH per-connection server daemon (10.0.0.1:60466).
Apr 14 01:01:42.366839 sshd[4504]: Accepted publickey for core from 10.0.0.1 port 60466 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 01:01:42.420072 sshd[4504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:01:42.600112 systemd-logind[1449]: New session 33 of user core.
Apr 14 01:01:42.673644 systemd[1]: Started session-33.scope - Session 33 of User core.
Apr 14 01:01:44.884611 sshd[4504]: pam_unix(sshd:session): session closed for user core
Apr 14 01:01:44.955589 systemd[1]: sshd@32-10.0.0.73:22-10.0.0.1:60466.service: Deactivated successfully.
Apr 14 01:01:44.991604 systemd[1]: session-33.scope: Deactivated successfully.
Apr 14 01:01:45.032410 systemd-logind[1449]: Session 33 logged out. Waiting for processes to exit.
Apr 14 01:01:45.043835 systemd-logind[1449]: Removed session 33.
Apr 14 01:01:50.112286 systemd[1]: Started sshd@33-10.0.0.73:22-10.0.0.1:60190.service - OpenSSH per-connection server daemon (10.0.0.1:60190).
Apr 14 01:01:50.844700 sshd[4542]: Accepted publickey for core from 10.0.0.1 port 60190 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 01:01:50.859189 sshd[4542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:01:51.041558 systemd-logind[1449]: New session 34 of user core.
Apr 14 01:01:51.079523 systemd[1]: Started session-34.scope - Session 34 of User core.
Apr 14 01:01:54.037834 sshd[4542]: pam_unix(sshd:session): session closed for user core
Apr 14 01:01:54.122094 systemd[1]: sshd@33-10.0.0.73:22-10.0.0.1:60190.service: Deactivated successfully.
Apr 14 01:01:54.203497 systemd[1]: session-34.scope: Deactivated successfully.
Apr 14 01:01:54.203833 systemd[1]: session-34.scope: Consumed 1.497s CPU time.
Apr 14 01:01:54.216654 systemd-logind[1449]: Session 34 logged out. Waiting for processes to exit.
Apr 14 01:01:54.233413 systemd-logind[1449]: Removed session 34.
Apr 14 01:01:59.126892 systemd[1]: Started sshd@34-10.0.0.73:22-10.0.0.1:38980.service - OpenSSH per-connection server daemon (10.0.0.1:38980).
Apr 14 01:01:59.454007 sshd[4596]: Accepted publickey for core from 10.0.0.1 port 38980 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 01:01:59.466843 sshd[4596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:01:59.557125 systemd-logind[1449]: New session 35 of user core.
Apr 14 01:01:59.581479 systemd[1]: Started session-35.scope - Session 35 of User core.
Apr 14 01:02:02.695578 sshd[4596]: pam_unix(sshd:session): session closed for user core
Apr 14 01:02:02.720683 systemd[1]: sshd@34-10.0.0.73:22-10.0.0.1:38980.service: Deactivated successfully.
Apr 14 01:02:02.809177 systemd[1]: session-35.scope: Deactivated successfully.
Apr 14 01:02:02.809670 systemd[1]: session-35.scope: Consumed 1.248s CPU time.
Apr 14 01:02:02.826225 systemd-logind[1449]: Session 35 logged out. Waiting for processes to exit.
Apr 14 01:02:02.834860 systemd-logind[1449]: Removed session 35.