Apr 14 00:42:47.579962 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 14 00:42:47.579995 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 00:42:47.580010 kernel: BIOS-provided physical RAM map:
Apr 14 00:42:47.580020 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 14 00:42:47.580029 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 14 00:42:47.580038 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 14 00:42:47.580048 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 14 00:42:47.580057 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 14 00:42:47.580075 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 14 00:42:47.580096 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 14 00:42:47.580114 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 14 00:42:47.580132 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 14 00:42:47.580149 kernel: NX (Execute Disable) protection: active
Apr 14 00:42:47.580157 kernel: APIC: Static calls initialized
Apr 14 00:42:47.580167 kernel: SMBIOS 2.8 present.
Apr 14 00:42:47.580177 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 14 00:42:47.580185 kernel: Hypervisor detected: KVM
Apr 14 00:42:47.580193 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 14 00:42:47.580201 kernel: kvm-clock: using sched offset of 9357924637 cycles
Apr 14 00:42:47.580210 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 14 00:42:47.580219 kernel: tsc: Detected 2793.438 MHz processor
Apr 14 00:42:47.580228 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 14 00:42:47.580237 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 14 00:42:47.580245 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 14 00:42:47.580255 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 14 00:42:47.580264 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 14 00:42:47.580270 kernel: Using GB pages for direct mapping
Apr 14 00:42:47.580277 kernel: ACPI: Early table checksum verification disabled
Apr 14 00:42:47.580285 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 14 00:42:47.580294 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:42:47.580302 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:42:47.580311 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:42:47.580318 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 14 00:42:47.580327 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:42:47.580336 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:42:47.580342 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:42:47.580350 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 00:42:47.580358 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 14 00:42:47.580367 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 14 00:42:47.580375 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 14 00:42:47.580386 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 14 00:42:47.580396 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 14 00:42:47.580405 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 14 00:42:47.580412 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 14 00:42:47.580421 kernel: No NUMA configuration found
Apr 14 00:42:47.580430 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 14 00:42:47.580437 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 14 00:42:47.580978 kernel: Zone ranges:
Apr 14 00:42:47.580999 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 14 00:42:47.581008 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 14 00:42:47.581017 kernel: Normal empty
Apr 14 00:42:47.581026 kernel: Movable zone start for each node
Apr 14 00:42:47.581034 kernel: Early memory node ranges
Apr 14 00:42:47.581043 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 14 00:42:47.581051 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 14 00:42:47.581061 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 14 00:42:47.581073 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 14 00:42:47.581082 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 14 00:42:47.581089 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 14 00:42:47.581096 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 14 00:42:47.581105 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 14 00:42:47.581113 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 14 00:42:47.581122 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 14 00:42:47.581130 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 14 00:42:47.581138 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 14 00:42:47.581148 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 14 00:42:47.581157 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 14 00:42:47.581167 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 14 00:42:47.581176 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 14 00:42:47.581185 kernel: TSC deadline timer available
Apr 14 00:42:47.581195 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 14 00:42:47.581203 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 14 00:42:47.581211 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 14 00:42:47.581219 kernel: kvm-guest: setup PV sched yield
Apr 14 00:42:47.581231 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 14 00:42:47.581239 kernel: Booting paravirtualized kernel on KVM
Apr 14 00:42:47.581248 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 14 00:42:47.581257 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 14 00:42:47.581265 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 14 00:42:47.581274 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 14 00:42:47.581283 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 14 00:42:47.581293 kernel: kvm-guest: PV spinlocks enabled
Apr 14 00:42:47.581304 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 14 00:42:47.581317 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 00:42:47.581326 kernel: random: crng init done
Apr 14 00:42:47.581333 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 14 00:42:47.581340 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 14 00:42:47.581349 kernel: Fallback order for Node 0: 0
Apr 14 00:42:47.581358 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 14 00:42:47.581368 kernel: Policy zone: DMA32
Apr 14 00:42:47.581376 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 14 00:42:47.581387 kernel: Memory: 2433648K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137900K reserved, 0K cma-reserved)
Apr 14 00:42:47.581395 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 14 00:42:47.581403 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 14 00:42:47.581412 kernel: ftrace: allocated 149 pages with 4 groups
Apr 14 00:42:47.581420 kernel: Dynamic Preempt: voluntary
Apr 14 00:42:47.581429 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 14 00:42:47.581439 kernel: rcu: RCU event tracing is enabled.
Apr 14 00:42:47.581448 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 14 00:42:47.581457 kernel: Trampoline variant of Tasks RCU enabled.
Apr 14 00:42:47.581469 kernel: Rude variant of Tasks RCU enabled.
Apr 14 00:42:47.581479 kernel: Tracing variant of Tasks RCU enabled.
Apr 14 00:42:47.581490 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 14 00:42:47.581499 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 14 00:42:47.581508 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 14 00:42:47.581517 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 14 00:42:47.581525 kernel: Console: colour VGA+ 80x25
Apr 14 00:42:47.581534 kernel: printk: console [ttyS0] enabled
Apr 14 00:42:47.581543 kernel: ACPI: Core revision 20230628
Apr 14 00:42:47.581550 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 14 00:42:47.581561 kernel: APIC: Switch to symmetric I/O mode setup
Apr 14 00:42:47.581782 kernel: x2apic enabled
Apr 14 00:42:47.581791 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 14 00:42:47.581798 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 14 00:42:47.581806 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 14 00:42:47.581813 kernel: kvm-guest: setup PV IPIs
Apr 14 00:42:47.581821 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 14 00:42:47.581829 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 14 00:42:47.581851 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 14 00:42:47.581859 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 14 00:42:47.581869 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 14 00:42:47.581882 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 14 00:42:47.581891 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 14 00:42:47.581901 kernel: Spectre V2 : Mitigation: Retpolines
Apr 14 00:42:47.581909 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 14 00:42:47.581917 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 14 00:42:47.581930 kernel: RETBleed: Vulnerable
Apr 14 00:42:47.581937 kernel: Speculative Store Bypass: Vulnerable
Apr 14 00:42:47.581947 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 14 00:42:47.581955 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 14 00:42:47.581964 kernel: active return thunk: its_return_thunk
Apr 14 00:42:47.581971 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 14 00:42:47.581980 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 14 00:42:47.581989 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 14 00:42:47.581997 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 14 00:42:47.582009 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 14 00:42:47.582016 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 14 00:42:47.582026 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 14 00:42:47.582034 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 14 00:42:47.582042 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 14 00:42:47.582051 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 14 00:42:47.582060 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 14 00:42:47.582071 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 14 00:42:47.582082 kernel: Freeing SMP alternatives memory: 32K
Apr 14 00:42:47.582095 kernel: pid_max: default: 32768 minimum: 301
Apr 14 00:42:47.582104 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 14 00:42:47.582112 kernel: landlock: Up and running.
Apr 14 00:42:47.582119 kernel: SELinux: Initializing.
Apr 14 00:42:47.582128 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 14 00:42:47.582136 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 14 00:42:47.582145 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 14 00:42:47.582154 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 00:42:47.582163 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 00:42:47.582175 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 00:42:47.582184 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 14 00:42:47.582193 kernel: signal: max sigframe size: 3632
Apr 14 00:42:47.582203 kernel: rcu: Hierarchical SRCU implementation.
Apr 14 00:42:47.582213 kernel: rcu: Max phase no-delay instances is 400.
Apr 14 00:42:47.582222 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 14 00:42:47.582232 kernel: smp: Bringing up secondary CPUs ...
Apr 14 00:42:47.582241 kernel: smpboot: x86: Booting SMP configuration:
Apr 14 00:42:47.582253 kernel: .... node #0, CPUs: #1 #2 #3
Apr 14 00:42:47.582261 kernel: smp: Brought up 1 node, 4 CPUs
Apr 14 00:42:47.582269 kernel: smpboot: Max logical packages: 1
Apr 14 00:42:47.582278 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 14 00:42:47.582288 kernel: devtmpfs: initialized
Apr 14 00:42:47.582298 kernel: x86/mm: Memory block size: 128MB
Apr 14 00:42:47.582307 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 14 00:42:47.582316 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 14 00:42:47.582324 kernel: pinctrl core: initialized pinctrl subsystem
Apr 14 00:42:47.582332 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 14 00:42:47.582343 kernel: audit: initializing netlink subsys (disabled)
Apr 14 00:42:47.582352 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 14 00:42:47.582361 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 14 00:42:47.582371 kernel: audit: type=2000 audit(1776127363.240:1): state=initialized audit_enabled=0 res=1
Apr 14 00:42:47.582380 kernel: cpuidle: using governor menu
Apr 14 00:42:47.582388 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 14 00:42:47.582396 kernel: dca service started, version 1.12.1
Apr 14 00:42:47.582405 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 14 00:42:47.582413 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 14 00:42:47.582424 kernel: PCI: Using configuration type 1 for base access
Apr 14 00:42:47.582432 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 14 00:42:47.582440 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 14 00:42:47.582449 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 14 00:42:47.582459 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 14 00:42:47.582466 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 14 00:42:47.582475 kernel: ACPI: Added _OSI(Module Device)
Apr 14 00:42:47.582485 kernel: ACPI: Added _OSI(Processor Device)
Apr 14 00:42:47.582496 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 14 00:42:47.582506 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 14 00:42:47.582516 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 14 00:42:47.582525 kernel: ACPI: Interpreter enabled
Apr 14 00:42:47.582535 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 14 00:42:47.582544 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 14 00:42:47.582554 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 14 00:42:47.582563 kernel: PCI: Using E820 reservations for host bridge windows
Apr 14 00:42:47.582840 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 14 00:42:47.582851 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 14 00:42:47.583086 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 14 00:42:47.583179 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 14 00:42:47.583263 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 14 00:42:47.583275 kernel: PCI host bridge to bus 0000:00
Apr 14 00:42:47.583362 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 14 00:42:47.583437 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 14 00:42:47.583517 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 14 00:42:47.584195 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 14 00:42:47.585059 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 14 00:42:47.585146 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 14 00:42:47.585224 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 14 00:42:47.585342 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 14 00:42:47.586116 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 14 00:42:47.586483 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 14 00:42:47.586655 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 14 00:42:47.586738 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 14 00:42:47.586808 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 14 00:42:47.586906 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 14 00:42:47.587456 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 14 00:42:47.587692 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 14 00:42:47.587783 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 14 00:42:47.588100 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 14 00:42:47.588192 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 14 00:42:47.588480 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 14 00:42:47.588543 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 14 00:42:47.588893 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 14 00:42:47.588988 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 14 00:42:47.589073 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 14 00:42:47.589494 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 14 00:42:47.590045 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 14 00:42:47.590157 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 14 00:42:47.590246 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 14 00:42:47.590343 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 14 00:42:47.590435 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 14 00:42:47.590492 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 14 00:42:47.590554 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 14 00:42:47.590855 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 14 00:42:47.590870 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 14 00:42:47.590881 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 14 00:42:47.590891 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 14 00:42:47.590905 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 14 00:42:47.590915 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 14 00:42:47.590924 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 14 00:42:47.590931 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 14 00:42:47.590937 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 14 00:42:47.590942 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 14 00:42:47.590948 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 14 00:42:47.590954 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 14 00:42:47.590959 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 14 00:42:47.590967 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 14 00:42:47.590972 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 14 00:42:47.590978 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 14 00:42:47.590984 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 14 00:42:47.590989 kernel: iommu: Default domain type: Translated
Apr 14 00:42:47.590995 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 14 00:42:47.591001 kernel: PCI: Using ACPI for IRQ routing
Apr 14 00:42:47.591006 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 14 00:42:47.591012 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 14 00:42:47.591019 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 14 00:42:47.591080 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 14 00:42:47.591135 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 14 00:42:47.591190 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 14 00:42:47.591197 kernel: vgaarb: loaded
Apr 14 00:42:47.591203 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 14 00:42:47.591209 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 14 00:42:47.591215 kernel: clocksource: Switched to clocksource kvm-clock
Apr 14 00:42:47.591220 kernel: VFS: Disk quotas dquot_6.6.0
Apr 14 00:42:47.591228 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 14 00:42:47.591233 kernel: pnp: PnP ACPI init
Apr 14 00:42:47.591304 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 14 00:42:47.591315 kernel: pnp: PnP ACPI: found 6 devices
Apr 14 00:42:47.591320 kernel: hrtimer: interrupt took 17336828 ns
Apr 14 00:42:47.591326 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 14 00:42:47.591332 kernel: NET: Registered PF_INET protocol family
Apr 14 00:42:47.591338 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 14 00:42:47.591346 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 14 00:42:47.591351 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 14 00:42:47.591357 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 14 00:42:47.591363 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 14 00:42:47.591368 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 14 00:42:47.591374 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 14 00:42:47.591380 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 14 00:42:47.591386 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 14 00:42:47.591391 kernel: NET: Registered PF_XDP protocol family
Apr 14 00:42:47.591447 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 14 00:42:47.591500 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 14 00:42:47.591550 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 14 00:42:47.591785 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 14 00:42:47.591915 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 14 00:42:47.591965 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 14 00:42:47.591975 kernel: PCI: CLS 0 bytes, default 64
Apr 14 00:42:47.591984 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 14 00:42:47.591997 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 14 00:42:47.592004 kernel: Initialise system trusted keyrings
Apr 14 00:42:47.592012 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 14 00:42:47.592021 kernel: Key type asymmetric registered
Apr 14 00:42:47.592030 kernel: Asymmetric key parser 'x509' registered
Apr 14 00:42:47.592040 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 14 00:42:47.592050 kernel: io scheduler mq-deadline registered
Apr 14 00:42:47.592060 kernel: io scheduler kyber registered
Apr 14 00:42:47.592069 kernel: io scheduler bfq registered
Apr 14 00:42:47.592080 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 14 00:42:47.592089 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 14 00:42:47.592098 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 14 00:42:47.592107 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 14 00:42:47.592115 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 14 00:42:47.592124 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 14 00:42:47.592133 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 14 00:42:47.592142 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 14 00:42:47.592151 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 14 00:42:47.593307 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 14 00:42:47.593345 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 14 00:42:47.593444 kernel: rtc_cmos 00:04: registered as rtc0
Apr 14 00:42:47.593524 kernel: rtc_cmos 00:04: setting system clock to 2026-04-14T00:42:46 UTC (1776127366)
Apr 14 00:42:47.594082 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 14 00:42:47.594101 kernel: intel_pstate: CPU model not supported
Apr 14 00:42:47.594111 kernel: NET: Registered PF_INET6 protocol family
Apr 14 00:42:47.594120 kernel: Segment Routing with IPv6
Apr 14 00:42:47.594135 kernel: In-situ OAM (IOAM) with IPv6
Apr 14 00:42:47.594144 kernel: NET: Registered PF_PACKET protocol family
Apr 14 00:42:47.594153 kernel: Key type dns_resolver registered
Apr 14 00:42:47.594162 kernel: IPI shorthand broadcast: enabled
Apr 14 00:42:47.594171 kernel: sched_clock: Marking stable (2276042450, 920725343)->(3803653833, -606886040)
Apr 14 00:42:47.594181 kernel: registered taskstats version 1
Apr 14 00:42:47.594190 kernel: Loading compiled-in X.509 certificates
Apr 14 00:42:47.594199 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00'
Apr 14 00:42:47.594208 kernel: Key type .fscrypt registered
Apr 14 00:42:47.594220 kernel: Key type fscrypt-provisioning registered
Apr 14 00:42:47.594229 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 14 00:42:47.594238 kernel: ima: Allocated hash algorithm: sha1
Apr 14 00:42:47.594249 kernel: ima: No architecture policies found
Apr 14 00:42:47.594260 kernel: clk: Disabling unused clocks
Apr 14 00:42:47.594271 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 14 00:42:47.594282 kernel: Write protecting the kernel read-only data: 36864k
Apr 14 00:42:47.594291 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 14 00:42:47.594301 kernel: Run /init as init process
Apr 14 00:42:47.594312 kernel: with arguments:
Apr 14 00:42:47.594321 kernel: /init
Apr 14 00:42:47.594331 kernel: with environment:
Apr 14 00:42:47.594339 kernel: HOME=/
Apr 14 00:42:47.594347 kernel: TERM=linux
Apr 14 00:42:47.594358 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 14 00:42:47.594371 systemd[1]: Detected virtualization kvm.
Apr 14 00:42:47.594382 systemd[1]: Detected architecture x86-64.
Apr 14 00:42:47.594395 systemd[1]: Running in initrd.
Apr 14 00:42:47.594405 systemd[1]: No hostname configured, using default hostname.
Apr 14 00:42:47.594415 systemd[1]: Hostname set to .
Apr 14 00:42:47.594426 systemd[1]: Initializing machine ID from VM UUID.
Apr 14 00:42:47.594435 systemd[1]: Queued start job for default target initrd.target.
Apr 14 00:42:47.594443 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 00:42:47.594452 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 00:42:47.594464 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 14 00:42:47.594479 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 14 00:42:47.594491 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 14 00:42:47.594515 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 14 00:42:47.594530 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 14 00:42:47.594543 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 14 00:42:47.594554 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 00:42:47.594565 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 14 00:42:47.594877 systemd[1]: Reached target paths.target - Path Units.
Apr 14 00:42:47.594887 systemd[1]: Reached target slices.target - Slice Units.
Apr 14 00:42:47.594896 systemd[1]: Reached target swap.target - Swaps.
Apr 14 00:42:47.594906 systemd[1]: Reached target timers.target - Timer Units.
Apr 14 00:42:47.594916 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 14 00:42:47.594925 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 14 00:42:47.594939 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 14 00:42:47.594948 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 14 00:42:47.594957 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 00:42:47.594967 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 14 00:42:47.594977 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 00:42:47.594987 systemd[1]: Reached target sockets.target - Socket Units.
Apr 14 00:42:47.594996 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 14 00:42:47.595005 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 14 00:42:47.595016 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 14 00:42:47.595026 systemd[1]: Starting systemd-fsck-usr.service...
Apr 14 00:42:47.595035 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 14 00:42:47.595043 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 14 00:42:47.595224 systemd-journald[194]: Collecting audit messages is disabled.
Apr 14 00:42:47.595272 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 00:42:47.595282 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 14 00:42:47.595292 systemd-journald[194]: Journal started
Apr 14 00:42:47.595317 systemd-journald[194]: Runtime Journal (/run/log/journal/0e0d0191c95d4122bee3d284c8944d44) is 6.0M, max 48.4M, 42.3M free.
Apr 14 00:42:47.626488 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 14 00:42:47.628359 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 00:42:47.973025 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 14 00:42:47.973095 kernel: Bridge firewalling registered
Apr 14 00:42:47.628999 systemd-modules-load[195]: Inserted module 'overlay'
Apr 14 00:42:47.755725 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 14 00:42:47.977122 systemd[1]: Finished systemd-fsck-usr.service.
Apr 14 00:42:47.977952 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 14 00:42:47.993047 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:42:48.022102 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 00:42:48.035313 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 14 00:42:48.047177 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 14 00:42:48.072314 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 14 00:42:48.114927 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 00:42:48.127367 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 14 00:42:48.146314 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 14 00:42:48.176098 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 14 00:42:48.196039 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 14 00:42:48.228514 dracut-cmdline[226]: dracut-dracut-053
Apr 14 00:42:48.228514 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 00:42:48.209456 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 00:42:48.228883 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 14 00:42:48.234393 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 00:42:48.356102 systemd-resolved[245]: Positive Trust Anchors:
Apr 14 00:42:48.357896 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 14 00:42:48.357932 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 14 00:42:48.361379 systemd-resolved[245]: Defaulting to hostname 'linux'.
Apr 14 00:42:48.363951 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 14 00:42:48.400037 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 14 00:42:48.529860 kernel: SCSI subsystem initialized
Apr 14 00:42:48.556074 kernel: Loading iSCSI transport class v2.0-870.
Apr 14 00:42:48.611447 kernel: iscsi: registered transport (tcp)
Apr 14 00:42:48.662119 kernel: iscsi: registered transport (qla4xxx)
Apr 14 00:42:48.662267 kernel: QLogic iSCSI HBA Driver
Apr 14 00:42:48.824500 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 14 00:42:48.858043 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 14 00:42:49.000875 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 14 00:42:49.005957 kernel: device-mapper: uevent: version 1.0.3
Apr 14 00:42:49.012817 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 14 00:42:49.144105 kernel: raid6: avx512x4 gen() 32255 MB/s
Apr 14 00:42:49.163168 kernel: raid6: avx512x2 gen() 24404 MB/s
Apr 14 00:42:49.180769 kernel: raid6: avx512x1 gen() 23308 MB/s
Apr 14 00:42:49.199107 kernel: raid6: avx2x4 gen() 22212 MB/s
Apr 14 00:42:49.217354 kernel: raid6: avx2x2 gen() 22075 MB/s
Apr 14 00:42:49.238169 kernel: raid6: avx2x1 gen() 15291 MB/s
Apr 14 00:42:49.238318 kernel: raid6: using algorithm avx512x4 gen() 32255 MB/s
Apr 14 00:42:49.259922 kernel: raid6: .... xor() 8450 MB/s, rmw enabled
Apr 14 00:42:49.260110 kernel: raid6: using avx512x2 recovery algorithm
Apr 14 00:42:49.317726 kernel: xor: automatically using best checksumming function avx
Apr 14 00:42:50.034164 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 14 00:42:50.079121 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 14 00:42:50.121125 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 00:42:50.149213 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Apr 14 00:42:50.158208 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 00:42:50.233078 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 14 00:42:50.292462 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Apr 14 00:42:50.459490 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 14 00:42:50.499864 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 14 00:42:50.617337 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 00:42:50.645375 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 14 00:42:50.739276 kernel: cryptd: max_cpu_qlen set to 1000
Apr 14 00:42:50.743941 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 14 00:42:50.755403 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 14 00:42:50.776144 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 14 00:42:50.769367 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 00:42:50.783194 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 14 00:42:50.799367 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 14 00:42:50.811965 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 14 00:42:50.812059 kernel: GPT:9289727 != 19775487
Apr 14 00:42:50.812067 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 14 00:42:50.815926 kernel: GPT:9289727 != 19775487
Apr 14 00:42:50.815998 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 14 00:42:50.819894 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 00:42:50.828679 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 14 00:42:50.846164 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 14 00:42:50.850196 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 00:42:50.859710 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 00:42:50.876403 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 14 00:42:50.877198 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:42:50.888564 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 00:42:50.926378 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (476)
Apr 14 00:42:50.926499 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (468)
Apr 14 00:42:50.952151 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 00:42:50.969104 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 14 00:42:50.982038 kernel: libata version 3.00 loaded.
Apr 14 00:42:50.982063 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 14 00:42:50.994868 kernel: AES CTR mode by8 optimization enabled
Apr 14 00:42:50.996826 kernel: ahci 0000:00:1f.2: version 3.0
Apr 14 00:42:50.997155 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 14 00:42:51.001809 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 14 00:42:51.002111 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 14 00:42:51.013176 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 14 00:42:51.409164 kernel: scsi host0: ahci
Apr 14 00:42:51.410167 kernel: scsi host1: ahci
Apr 14 00:42:51.410681 kernel: scsi host2: ahci
Apr 14 00:42:51.410845 kernel: scsi host3: ahci
Apr 14 00:42:51.410959 kernel: scsi host4: ahci
Apr 14 00:42:51.411067 kernel: scsi host5: ahci
Apr 14 00:42:51.411167 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 14 00:42:51.411179 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 14 00:42:51.411197 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 14 00:42:51.411208 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 14 00:42:51.411220 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 14 00:42:51.411231 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 14 00:42:51.411243 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 14 00:42:51.411255 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 14 00:42:51.411266 kernel: ata3.00: applying bridge limits
Apr 14 00:42:51.411276 kernel: ata3.00: configured for UDMA/100
Apr 14 00:42:51.411285 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 14 00:42:51.411300 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 14 00:42:51.411311 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 14 00:42:51.411521 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 14 00:42:51.033021 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 14 00:42:51.447212 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 14 00:42:51.447255 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 14 00:42:51.423781 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:42:51.469292 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 14 00:42:51.511204 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 14 00:42:51.534397 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 14 00:42:51.633072 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 14 00:42:51.706078 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 00:42:51.740790 disk-uuid[560]: Primary Header is updated.
Apr 14 00:42:51.740790 disk-uuid[560]: Secondary Entries is updated.
Apr 14 00:42:51.740790 disk-uuid[560]: Secondary Header is updated.
Apr 14 00:42:51.743463 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 00:42:51.776069 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 00:42:51.780016 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 00:42:51.822820 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 14 00:42:51.823218 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 14 00:42:51.859087 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 14 00:42:52.792056 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 00:42:52.797507 disk-uuid[562]: The operation has completed successfully.
Apr 14 00:42:52.910183 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 14 00:42:52.911112 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 14 00:42:52.973178 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 14 00:42:52.991706 sh[592]: Success
Apr 14 00:42:53.147102 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 14 00:42:53.289132 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 14 00:42:53.297498 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 14 00:42:53.332502 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 14 00:42:53.381917 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 14 00:42:53.382075 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 14 00:42:53.393790 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 14 00:42:53.393947 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 14 00:42:53.400787 kernel: BTRFS info (device dm-0): using free space tree
Apr 14 00:42:53.467349 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 14 00:42:53.487662 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 14 00:42:53.551234 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 14 00:42:53.576906 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 14 00:42:53.623000 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 00:42:53.623149 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 00:42:53.623164 kernel: BTRFS info (device vda6): using free space tree
Apr 14 00:42:53.677923 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 00:42:53.731314 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 14 00:42:53.747264 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 00:42:53.816113 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 14 00:42:53.847362 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 14 00:42:54.110859 ignition[699]: Ignition 2.19.0
Apr 14 00:42:54.111877 ignition[699]: Stage: fetch-offline
Apr 14 00:42:54.113240 ignition[699]: no configs at "/usr/lib/ignition/base.d"
Apr 14 00:42:54.113268 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:42:54.113447 ignition[699]: parsed url from cmdline: ""
Apr 14 00:42:54.113451 ignition[699]: no config URL provided
Apr 14 00:42:54.113458 ignition[699]: reading system config file "/usr/lib/ignition/user.ign"
Apr 14 00:42:54.113467 ignition[699]: no config at "/usr/lib/ignition/user.ign"
Apr 14 00:42:54.157121 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 14 00:42:54.113508 ignition[699]: op(1): [started] loading QEMU firmware config module
Apr 14 00:42:54.113513 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 14 00:42:54.178772 ignition[699]: op(1): [finished] loading QEMU firmware config module
Apr 14 00:42:54.202937 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 14 00:42:54.325819 ignition[699]: parsing config with SHA512: 4b2510345ba857de35de017ab9c4aa2afb47d79bc067c4ca41b5776fc847dd69ded70b053eab17c8c11424b5a0f5674dfbcf9f696c7e833c35391300daaabe7d
Apr 14 00:42:54.342917 unknown[699]: fetched base config from "system"
Apr 14 00:42:54.343499 ignition[699]: fetch-offline: fetch-offline passed
Apr 14 00:42:54.342933 unknown[699]: fetched user config from "qemu"
Apr 14 00:42:54.350110 ignition[699]: Ignition finished successfully
Apr 14 00:42:54.343366 systemd-networkd[780]: lo: Link UP
Apr 14 00:42:54.343372 systemd-networkd[780]: lo: Gained carrier
Apr 14 00:42:54.348387 systemd-networkd[780]: Enumeration completed
Apr 14 00:42:54.349317 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 14 00:42:54.353092 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 00:42:54.353095 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 14 00:42:54.359227 systemd-networkd[780]: eth0: Link UP
Apr 14 00:42:54.359232 systemd-networkd[780]: eth0: Gained carrier
Apr 14 00:42:54.359241 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 00:42:54.362366 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 14 00:42:54.376170 systemd[1]: Reached target network.target - Network.
Apr 14 00:42:54.381690 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 14 00:42:54.405335 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 14 00:42:54.414386 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.23/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 14 00:42:54.594912 ignition[783]: Ignition 2.19.0
Apr 14 00:42:54.595559 ignition[783]: Stage: kargs
Apr 14 00:42:54.595873 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Apr 14 00:42:54.595883 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:42:54.614380 ignition[783]: kargs: kargs passed
Apr 14 00:42:54.614453 ignition[783]: Ignition finished successfully
Apr 14 00:42:54.627960 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 14 00:42:54.664329 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 14 00:42:54.729121 ignition[792]: Ignition 2.19.0
Apr 14 00:42:54.737295 ignition[792]: Stage: disks
Apr 14 00:42:54.737828 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Apr 14 00:42:54.737853 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:42:54.756922 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 14 00:42:54.742565 ignition[792]: disks: disks passed
Apr 14 00:42:54.771085 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 14 00:42:54.742809 ignition[792]: Ignition finished successfully
Apr 14 00:42:54.773504 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 14 00:42:54.773563 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 14 00:42:54.774505 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 14 00:42:54.774553 systemd[1]: Reached target basic.target - Basic System.
Apr 14 00:42:54.871245 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 14 00:42:54.918321 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 14 00:42:54.936214 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 14 00:42:55.037648 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 14 00:42:55.583342 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 14 00:42:55.609123 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 14 00:42:55.622430 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 14 00:42:55.658223 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 14 00:42:55.688509 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 14 00:42:55.707228 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811)
Apr 14 00:42:55.701466 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 14 00:42:55.701530 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 14 00:42:55.727125 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 00:42:55.727156 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 00:42:55.727180 kernel: BTRFS info (device vda6): using free space tree
Apr 14 00:42:55.702980 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 14 00:42:55.761977 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 00:42:55.769902 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 14 00:42:55.788487 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 14 00:42:55.809359 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 14 00:42:56.012186 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Apr 14 00:42:56.050128 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Apr 14 00:42:56.098264 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Apr 14 00:42:56.138506 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 14 00:42:56.231277 systemd-networkd[780]: eth0: Gained IPv6LL
Apr 14 00:42:56.714715 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 14 00:42:56.745529 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 14 00:42:56.767312 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 14 00:42:56.813551 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 14 00:42:56.834211 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 00:42:57.045200 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 14 00:42:57.056064 ignition[924]: INFO : Ignition 2.19.0
Apr 14 00:42:57.056064 ignition[924]: INFO : Stage: mount
Apr 14 00:42:57.056064 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 00:42:57.056064 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:42:57.056064 ignition[924]: INFO : mount: mount passed
Apr 14 00:42:57.056064 ignition[924]: INFO : Ignition finished successfully
Apr 14 00:42:57.082315 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 14 00:42:57.108228 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 14 00:42:57.150396 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 14 00:42:57.236109 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937)
Apr 14 00:42:57.250386 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 00:42:57.250538 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 00:42:57.262087 kernel: BTRFS info (device vda6): using free space tree
Apr 14 00:42:57.293070 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 00:42:57.312395 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 14 00:42:57.424536 ignition[953]: INFO : Ignition 2.19.0
Apr 14 00:42:57.424536 ignition[953]: INFO : Stage: files
Apr 14 00:42:57.433098 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 00:42:57.433098 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:42:57.444432 ignition[953]: DEBUG : files: compiled without relabeling support, skipping
Apr 14 00:42:57.449152 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 14 00:42:57.449152 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 14 00:42:57.511363 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 14 00:42:57.520987 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 14 00:42:57.543863 unknown[953]: wrote ssh authorized keys file for user: core
Apr 14 00:42:57.551805 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 14 00:42:57.558435 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 14 00:42:57.558435 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 14 00:42:57.759796 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 14 00:42:58.084068 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 14 00:42:58.084068 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 14 00:42:58.113908 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 14 00:42:58.113908 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 14 00:42:58.113908 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 14 00:42:58.113908 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 14 00:42:58.113908 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 14 00:42:58.113908 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 14 00:42:58.113908 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 14 00:42:58.113908 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 14 00:42:58.113908 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 14 00:42:58.113908 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 14 00:42:58.113908 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 14 00:42:58.113908 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 14 00:42:58.113908 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 14 00:42:58.296740 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 14 00:42:58.959238 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 14 00:42:58.959238 ignition[953]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 14 00:42:59.040531 ignition[953]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 14 00:42:59.040531 ignition[953]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 14 00:42:59.040531 ignition[953]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 14 00:42:59.040531 ignition[953]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 14 00:42:59.040531 ignition[953]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 14 00:42:59.040531 ignition[953]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 14 00:42:59.040531 ignition[953]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 14 00:42:59.040531 ignition[953]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 14 00:42:59.192803 ignition[953]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 14 00:42:59.209242 ignition[953]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 14 00:42:59.209242 ignition[953]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 14 00:42:59.209242 ignition[953]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 14 00:42:59.209242 ignition[953]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 14 00:42:59.209242 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 14 00:42:59.209242 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 14 00:42:59.209242 ignition[953]: INFO : files: files passed
Apr 14 00:42:59.209242 ignition[953]: INFO : Ignition finished successfully
Apr 14 00:42:59.230498 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 14 00:42:59.321348 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 14 00:42:59.344245 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 14 00:42:59.378979 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 14 00:42:59.379495 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 14 00:42:59.403865 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 14 00:42:59.423163 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 00:42:59.423163 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 00:42:59.446248 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 00:42:59.536340 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 14 00:42:59.580467 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 14 00:42:59.625215 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 14 00:42:59.699166 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 14 00:42:59.700021 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 14 00:42:59.720291 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 14 00:42:59.728852 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 14 00:42:59.744115 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 14 00:42:59.774521 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 14 00:42:59.820845 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 14 00:42:59.842524 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 14 00:42:59.883980 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 14 00:42:59.892618 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 00:42:59.907368 systemd[1]: Stopped target timers.target - Timer Units.
Apr 14 00:42:59.922047 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 14 00:42:59.922483 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 14 00:42:59.946828 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 14 00:42:59.959229 systemd[1]: Stopped target basic.target - Basic System.
Apr 14 00:43:00.015240 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 14 00:43:00.042081 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 14 00:43:00.064046 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 14 00:43:00.069013 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 14 00:43:00.083977 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 14 00:43:00.096136 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 14 00:43:00.108240 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 14 00:43:00.109503 systemd[1]: Stopped target swap.target - Swaps.
Apr 14 00:43:00.111541 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 14 00:43:00.111803 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 14 00:43:00.133292 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 14 00:43:00.137928 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 00:43:00.150236 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 14 00:43:00.151698 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 00:43:00.163103 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 14 00:43:00.164356 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 14 00:43:00.191118 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 14 00:43:00.192318 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 14 00:43:00.200915 systemd[1]: Stopped target paths.target - Path Units.
Apr 14 00:43:00.214328 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 14 00:43:00.216453 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 00:43:00.233227 systemd[1]: Stopped target slices.target - Slice Units.
Apr 14 00:43:00.253172 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 14 00:43:00.273226 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 14 00:43:00.273360 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 14 00:43:00.290067 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 14 00:43:00.290446 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 14 00:43:00.300083 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 14 00:43:00.300241 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 14 00:43:00.309444 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 14 00:43:00.311244 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 14 00:43:00.410419 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 14 00:43:00.428167 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 14 00:43:00.432236 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 14 00:43:00.432871 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 00:43:00.442808 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 14 00:43:00.442946 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 14 00:43:00.483143 ignition[1009]: INFO : Ignition 2.19.0
Apr 14 00:43:00.483143 ignition[1009]: INFO : Stage: umount
Apr 14 00:43:00.483143 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 00:43:00.483143 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:43:00.483143 ignition[1009]: INFO : umount: umount passed
Apr 14 00:43:00.483143 ignition[1009]: INFO : Ignition finished successfully
Apr 14 00:43:00.469092 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 14 00:43:00.469942 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 14 00:43:00.479495 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 14 00:43:00.484434 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 14 00:43:00.485236 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 14 00:43:00.502761 systemd[1]: Stopped target network.target - Network.
Apr 14 00:43:00.527104 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 14 00:43:00.527318 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 14 00:43:00.538230 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 14 00:43:00.538523 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 14 00:43:00.570049 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 14 00:43:00.570190 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 14 00:43:00.584978 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 14 00:43:00.588232 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 14 00:43:00.591257 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 14 00:43:00.608008 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 14 00:43:00.659493 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 14 00:43:00.662348 systemd-networkd[780]: eth0: DHCPv6 lease lost
Apr 14 00:43:00.663204 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 14 00:43:00.681242 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 14 00:43:00.683105 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 14 00:43:00.687750 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 14 00:43:00.691814 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 14 00:43:00.711041 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 14 00:43:00.711103 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 00:43:00.738236 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 14 00:43:00.739008 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 14 00:43:00.775495 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 14 00:43:00.784509 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 14 00:43:00.785128 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 14 00:43:00.808033 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 14 00:43:00.808460 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 14 00:43:00.814897 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 14 00:43:00.814971 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 14 00:43:00.831306 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 14 00:43:00.831402 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 00:43:00.851096 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 00:43:00.943909 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 14 00:43:00.947513 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 00:43:00.980931 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 14 00:43:00.981252 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 14 00:43:01.001968 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 14 00:43:01.002018 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 00:43:01.026292 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 14 00:43:01.027277 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 14 00:43:01.055667 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 14 00:43:01.055781 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 14 00:43:01.079060 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 14 00:43:01.079397 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 00:43:01.118187 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 14 00:43:01.136181 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 14 00:43:01.137106 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 00:43:01.137280 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 14 00:43:01.138199 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 14 00:43:01.175492 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 14 00:43:01.191358 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 00:43:01.223869 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 14 00:43:01.223938 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:43:01.245412 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 14 00:43:01.247289 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 14 00:43:01.286407 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 14 00:43:01.287202 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 14 00:43:01.304170 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 14 00:43:01.343109 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 14 00:43:01.370952 systemd[1]: Switching root.
Apr 14 00:43:01.423450 systemd-journald[194]: Journal stopped
Apr 14 00:43:04.458188 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 14 00:43:04.458418 kernel: SELinux: policy capability network_peer_controls=1
Apr 14 00:43:04.458477 kernel: SELinux: policy capability open_perms=1
Apr 14 00:43:04.458535 kernel: SELinux: policy capability extended_socket_class=1
Apr 14 00:43:04.458551 kernel: SELinux: policy capability always_check_network=0
Apr 14 00:43:04.458563 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 14 00:43:04.458796 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 14 00:43:04.458967 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 14 00:43:04.458978 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 14 00:43:04.458991 kernel: audit: type=1403 audit(1776127381.648:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 14 00:43:04.459005 systemd[1]: Successfully loaded SELinux policy in 75.313ms.
Apr 14 00:43:04.459038 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.977ms.
Apr 14 00:43:04.459054 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 14 00:43:04.459066 systemd[1]: Detected virtualization kvm.
Apr 14 00:43:04.459077 systemd[1]: Detected architecture x86-64.
Apr 14 00:43:04.459090 systemd[1]: Detected first boot.
Apr 14 00:43:04.459101 systemd[1]: Initializing machine ID from VM UUID.
Apr 14 00:43:04.459113 zram_generator::config[1054]: No configuration found.
Apr 14 00:43:04.459130 systemd[1]: Populated /etc with preset unit settings.
Apr 14 00:43:04.459144 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 14 00:43:04.459156 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 14 00:43:04.459169 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 14 00:43:04.459181 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 14 00:43:04.459193 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 14 00:43:04.459204 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 14 00:43:04.459219 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 14 00:43:04.459230 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 14 00:43:04.459244 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 14 00:43:04.459255 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 14 00:43:04.459268 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 14 00:43:04.459279 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 00:43:04.459294 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 00:43:04.459305 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 14 00:43:04.459317 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 14 00:43:04.459328 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 14 00:43:04.459340 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 14 00:43:04.459354 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 14 00:43:04.459365 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 00:43:04.459377 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 14 00:43:04.459390 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 14 00:43:04.459401 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 14 00:43:04.459414 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 14 00:43:04.459424 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 00:43:04.459435 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 14 00:43:04.459449 systemd[1]: Reached target slices.target - Slice Units.
Apr 14 00:43:04.459460 systemd[1]: Reached target swap.target - Swaps.
Apr 14 00:43:04.459473 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 14 00:43:04.459483 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 14 00:43:04.459496 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 00:43:04.459508 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 14 00:43:04.459520 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 00:43:04.459533 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 14 00:43:04.459546 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 14 00:43:04.459562 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 14 00:43:04.459742 systemd[1]: Mounting media.mount - External Media Directory...
Apr 14 00:43:04.459801 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:43:04.459816 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 14 00:43:04.459829 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 14 00:43:04.459844 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 14 00:43:04.459857 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 14 00:43:04.459868 systemd[1]: Reached target machines.target - Containers.
Apr 14 00:43:04.459891 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 14 00:43:04.459903 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 00:43:04.459915 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 14 00:43:04.459928 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 14 00:43:04.459939 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 00:43:04.459952 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 14 00:43:04.459967 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 00:43:04.459978 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 14 00:43:04.459991 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 00:43:04.460008 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 14 00:43:04.460022 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 14 00:43:04.460036 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 14 00:43:04.460053 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 14 00:43:04.460067 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 14 00:43:04.460081 kernel: ACPI: bus type drm_connector registered
Apr 14 00:43:04.460097 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 14 00:43:04.460111 kernel: fuse: init (API version 7.39)
Apr 14 00:43:04.460124 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 14 00:43:04.460140 kernel: loop: module loaded
Apr 14 00:43:04.460152 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 14 00:43:04.460166 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 14 00:43:04.460208 systemd-journald[1138]: Collecting audit messages is disabled.
Apr 14 00:43:04.460236 systemd-journald[1138]: Journal started
Apr 14 00:43:04.460264 systemd-journald[1138]: Runtime Journal (/run/log/journal/0e0d0191c95d4122bee3d284c8944d44) is 6.0M, max 48.4M, 42.3M free.
Apr 14 00:43:03.228944 systemd[1]: Queued start job for default target multi-user.target.
Apr 14 00:43:03.305508 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 14 00:43:03.307390 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 14 00:43:03.311180 systemd[1]: systemd-journald.service: Consumed 1.249s CPU time.
Apr 14 00:43:04.467718 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 14 00:43:04.480762 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 14 00:43:04.480876 systemd[1]: Stopped verity-setup.service.
Apr 14 00:43:04.497426 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:43:04.497637 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 14 00:43:04.506311 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 14 00:43:04.511909 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 14 00:43:04.518345 systemd[1]: Mounted media.mount - External Media Directory.
Apr 14 00:43:04.522401 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 14 00:43:04.528431 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 14 00:43:04.538913 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 14 00:43:04.553966 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 14 00:43:04.617092 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 00:43:04.631537 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 14 00:43:04.632469 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 14 00:43:04.642275 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 00:43:04.643193 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 00:43:04.648308 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 14 00:43:04.648971 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 14 00:43:04.657355 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 00:43:04.657564 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 00:43:04.670261 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 14 00:43:04.670965 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 14 00:43:04.680229 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 00:43:04.682051 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 00:43:04.690470 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 14 00:43:04.697908 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 14 00:43:04.705910 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 14 00:43:04.713482 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 00:43:04.744202 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 14 00:43:04.768459 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 14 00:43:04.784958 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 14 00:43:04.796997 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 14 00:43:04.798923 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 14 00:43:04.808023 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 14 00:43:04.832138 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 14 00:43:04.850873 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 14 00:43:04.866328 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 00:43:04.871412 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 14 00:43:04.895397 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 14 00:43:04.903049 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 14 00:43:04.921192 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 14 00:43:04.928706 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 14 00:43:04.938121 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 14 00:43:04.948535 systemd-journald[1138]: Time spent on flushing to /var/log/journal/0e0d0191c95d4122bee3d284c8944d44 is 39.908ms for 953 entries.
Apr 14 00:43:04.948535 systemd-journald[1138]: System Journal (/var/log/journal/0e0d0191c95d4122bee3d284c8944d44) is 8.0M, max 195.6M, 187.6M free.
Apr 14 00:43:05.026064 systemd-journald[1138]: Received client request to flush runtime journal.
Apr 14 00:43:04.959356 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 14 00:43:04.984204 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 14 00:43:05.010400 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 14 00:43:05.023294 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 14 00:43:05.034915 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 14 00:43:05.056415 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 14 00:43:05.121301 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 14 00:43:05.128354 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 14 00:43:05.156712 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 14 00:43:05.173694 kernel: loop0: detected capacity change from 0 to 228704
Apr 14 00:43:05.183380 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 14 00:43:05.203354 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Apr 14 00:43:05.203960 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Apr 14 00:43:05.204751 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 14 00:43:05.214020 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 14 00:43:05.232976 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 14 00:43:05.234356 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 14 00:43:05.242237 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 14 00:43:05.243562 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 14 00:43:05.253949 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 14 00:43:05.302682 kernel: loop1: detected capacity change from 0 to 142488
Apr 14 00:43:05.319322 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 14 00:43:05.343558 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 14 00:43:05.378214 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Apr 14 00:43:05.378378 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Apr 14 00:43:05.386943 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 00:43:05.408800 kernel: loop2: detected capacity change from 0 to 140768
Apr 14 00:43:05.534333 kernel: loop3: detected capacity change from 0 to 228704
Apr 14 00:43:05.620029 kernel: loop4: detected capacity change from 0 to 142488
Apr 14 00:43:05.756963 kernel: loop5: detected capacity change from 0 to 140768
Apr 14 00:43:05.828961 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 14 00:43:05.829532 (sd-merge)[1196]: Merged extensions into '/usr'.
Apr 14 00:43:05.848505 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 14 00:43:05.848553 systemd[1]: Reloading...
Apr 14 00:43:05.935963 zram_generator::config[1224]: No configuration found.
Apr 14 00:43:06.242271 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 00:43:06.354546 systemd[1]: Reloading finished in 505 ms.
Apr 14 00:43:06.409472 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 14 00:43:06.443100 systemd[1]: Starting ensure-sysext.service...
Apr 14 00:43:06.454158 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 14 00:43:06.478327 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 14 00:43:06.478992 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Apr 14 00:43:06.479206 systemd[1]: Reloading...
Apr 14 00:43:06.519438 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 14 00:43:06.520171 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 14 00:43:06.522197 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 14 00:43:06.522463 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Apr 14 00:43:06.522516 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Apr 14 00:43:06.530356 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Apr 14 00:43:06.530366 systemd-tmpfiles[1259]: Skipping /boot
Apr 14 00:43:06.548177 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Apr 14 00:43:06.548198 systemd-tmpfiles[1259]: Skipping /boot
Apr 14 00:43:06.562035 zram_generator::config[1287]: No configuration found.
Apr 14 00:43:06.877215 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 00:43:06.986430 systemd[1]: Reloading finished in 506 ms.
Apr 14 00:43:07.010845 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 14 00:43:07.033028 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 14 00:43:07.060913 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 00:43:07.123143 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 14 00:43:07.154936 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 14 00:43:07.221051 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 14 00:43:07.247240 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 14 00:43:07.254326 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 00:43:07.265132 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 14 00:43:07.284016 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 14 00:43:07.294786 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:43:07.295249 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 00:43:07.313793 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 00:43:07.324285 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 00:43:07.338987 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 00:43:07.342219 systemd-udevd[1338]: Using default interface naming scheme 'v255'.
Apr 14 00:43:07.344415 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 00:43:07.345008 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:43:07.348874 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 14 00:43:07.354208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 00:43:07.354458 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 00:43:07.366492 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 00:43:07.367528 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 00:43:07.390970 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 14 00:43:07.402340 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 00:43:07.402538 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 00:43:07.411988 augenrules[1355]: No rules
Apr 14 00:43:07.417791 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 14 00:43:07.428075 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 14 00:43:07.435386 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 00:43:07.456320 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:43:07.457390 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 00:43:07.467226 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 00:43:07.478510 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 00:43:07.491115 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 00:43:07.495171 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 00:43:07.504300 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 14 00:43:07.515812 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 14 00:43:07.521358 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:43:07.528100 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 00:43:07.531746 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 00:43:07.540438 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 00:43:07.546141 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 00:43:07.619963 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 00:43:07.620166 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 00:43:07.645301 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 14 00:43:07.671254 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 14 00:43:07.704074 systemd[1]: Finished ensure-sysext.service.
Apr 14 00:43:07.726154 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 14 00:43:07.736746 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1380)
Apr 14 00:43:07.737887 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:43:07.738233 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 00:43:07.751842 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 00:43:07.758044 systemd-resolved[1335]: Positive Trust Anchors:
Apr 14 00:43:07.758064 systemd-resolved[1335]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 14 00:43:07.758101 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 14 00:43:07.760292 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 14 00:43:07.766928 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 00:43:07.783107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 00:43:07.787276 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 00:43:07.792117 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 14 00:43:07.796333 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 14 00:43:07.796374 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:43:07.797847 systemd-resolved[1335]: Defaulting to hostname 'linux'.
Apr 14 00:43:07.799199 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 00:43:07.799441 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 00:43:07.804636 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 14 00:43:07.815271 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 14 00:43:07.816079 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 14 00:43:07.821126 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 00:43:07.822538 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 00:43:07.857384 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 00:43:07.857776 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 00:43:07.867810 systemd-networkd[1389]: lo: Link UP
Apr 14 00:43:07.868341 systemd-networkd[1389]: lo: Gained carrier
Apr 14 00:43:07.873471 systemd-networkd[1389]: Enumeration completed
Apr 14 00:43:07.876288 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 14 00:43:07.877688 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 00:43:07.877857 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 14 00:43:07.881864 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 00:43:07.882203 systemd-networkd[1389]: eth0: Link UP
Apr 14 00:43:07.882237 systemd-networkd[1389]: eth0: Gained carrier
Apr 14 00:43:07.882308 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 00:43:07.882474 systemd[1]: Reached target network.target - Network.
Apr 14 00:43:07.890208 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 14 00:43:07.909759 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 14 00:43:07.909935 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.23/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 14 00:43:07.910178 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 14 00:43:07.914879 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 14 00:43:07.915229 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 14 00:43:07.917904 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 14 00:43:07.929696 kernel: ACPI: button: Power Button [PWRF]
Apr 14 00:43:07.934087 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 14 00:43:07.967682 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 14 00:43:07.967912 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 14 00:43:07.974713 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 14 00:43:07.984360 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 14 00:43:07.987241 systemd[1]: Reached target time-set.target - System Time Set.
Apr 14 00:43:07.996955 systemd-timesyncd[1407]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 14 00:43:07.997051 systemd-timesyncd[1407]: Initial clock synchronization to Tue 2026-04-14 00:43:08.274104 UTC.
Apr 14 00:43:07.999819 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 14 00:43:08.066280 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 00:43:08.086675 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 14 00:43:08.419786 kernel: mousedev: PS/2 mouse device common for all mice
Apr 14 00:43:08.880139 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 14 00:43:08.944438 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 14 00:43:08.955493 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:43:08.976871 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 14 00:43:09.032942 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 14 00:43:09.038086 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 14 00:43:09.042948 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 14 00:43:09.047330 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 14 00:43:09.051568 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 14 00:43:09.060015 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 14 00:43:09.069290 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 14 00:43:09.077258 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 14 00:43:09.086323 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 14 00:43:09.086423 systemd[1]: Reached target paths.target - Path Units.
Apr 14 00:43:09.092250 systemd[1]: Reached target timers.target - Timer Units.
Apr 14 00:43:09.100917 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 14 00:43:09.112790 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 14 00:43:09.134885 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 14 00:43:09.157203 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 14 00:43:09.168989 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 14 00:43:09.170833 systemd[1]: Reached target sockets.target - Socket Units.
Apr 14 00:43:09.181587 systemd[1]: Reached target basic.target - Basic System.
Apr 14 00:43:09.186281 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 14 00:43:09.186353 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 14 00:43:09.189415 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 14 00:43:09.190567 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 14 00:43:09.200140 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 14 00:43:09.257573 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 14 00:43:09.271864 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 14 00:43:09.276977 jq[1438]: false
Apr 14 00:43:09.276832 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 14 00:43:09.279215 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 14 00:43:09.288822 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 14 00:43:09.312109 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 14 00:43:09.320991 extend-filesystems[1439]: Found loop3
Apr 14 00:43:09.320991 extend-filesystems[1439]: Found loop4
Apr 14 00:43:09.320991 extend-filesystems[1439]: Found loop5
Apr 14 00:43:09.320991 extend-filesystems[1439]: Found sr0
Apr 14 00:43:09.320991 extend-filesystems[1439]: Found vda
Apr 14 00:43:09.320991 extend-filesystems[1439]: Found vda1
Apr 14 00:43:09.320991 extend-filesystems[1439]: Found vda2
Apr 14 00:43:09.320991 extend-filesystems[1439]: Found vda3
Apr 14 00:43:09.320991 extend-filesystems[1439]: Found usr
Apr 14 00:43:09.320991 extend-filesystems[1439]: Found vda4
Apr 14 00:43:09.320991 extend-filesystems[1439]: Found vda6
Apr 14 00:43:09.320991 extend-filesystems[1439]: Found vda7
Apr 14 00:43:09.320991 extend-filesystems[1439]: Found vda9
Apr 14 00:43:09.320991 extend-filesystems[1439]: Checking size of /dev/vda9
Apr 14 00:43:09.358345 dbus-daemon[1437]: [system] SELinux support is enabled
Apr 14 00:43:09.388853 extend-filesystems[1439]: Resized partition /dev/vda9
Apr 14 00:43:09.409797 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 14 00:43:09.326250 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 14 00:43:09.410130 extend-filesystems[1454]: resize2fs 1.47.1 (20-May-2024)
Apr 14 00:43:09.352162 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 14 00:43:09.359205 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 14 00:43:09.360844 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 14 00:43:09.362949 systemd[1]: Starting update-engine.service - Update Engine...
Apr 14 00:43:09.387414 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 14 00:43:09.399143 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 14 00:43:09.414459 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 14 00:43:09.440933 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 14 00:43:09.446022 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 14 00:43:09.452791 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1367)
Apr 14 00:43:09.452334 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 14 00:43:09.453565 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 14 00:43:09.467277 update_engine[1452]: I20260414 00:43:09.463665 1452 main.cc:92] Flatcar Update Engine starting
Apr 14 00:43:09.475292 update_engine[1452]: I20260414 00:43:09.468852 1452 update_check_scheduler.cc:74] Next update check in 9m54s
Apr 14 00:43:09.488467 systemd[1]: motdgen.service: Deactivated successfully.
Apr 14 00:43:09.490928 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 14 00:43:09.497562 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 14 00:43:09.513834 jq[1456]: true
Apr 14 00:43:09.536324 jq[1471]: true
Apr 14 00:43:09.556886 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 14 00:43:09.598307 extend-filesystems[1454]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 14 00:43:09.598307 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 14 00:43:09.598307 extend-filesystems[1454]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 14 00:43:09.630133 tar[1462]: linux-amd64/LICENSE
Apr 14 00:43:09.630133 tar[1462]: linux-amd64/helm
Apr 14 00:43:09.601690 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 14 00:43:09.637161 extend-filesystems[1439]: Resized filesystem in /dev/vda9
Apr 14 00:43:09.602317 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 14 00:43:09.611305 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 14 00:43:09.611320 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 14 00:43:09.615776 systemd-logind[1448]: New seat seat0.
Apr 14 00:43:09.623801 systemd[1]: Started update-engine.service - Update Engine.
Apr 14 00:43:09.630341 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 14 00:43:09.642351 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 14 00:43:09.642982 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 14 00:43:09.648871 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 14 00:43:09.648973 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 14 00:43:09.673488 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 14 00:43:09.797156 bash[1493]: Updated "/home/core/.ssh/authorized_keys"
Apr 14 00:43:09.802952 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 14 00:43:09.814126 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 14 00:43:09.860119 locksmithd[1492]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 14 00:43:09.864319 systemd-networkd[1389]: eth0: Gained IPv6LL
Apr 14 00:43:09.898111 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 14 00:43:09.918823 systemd[1]: Reached target network-online.target - Network is Online.
Apr 14 00:43:09.940276 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 14 00:43:09.969156 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 00:43:09.998219 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 14 00:43:10.094455 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 14 00:43:10.095488 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 14 00:43:10.103029 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 14 00:43:10.114519 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 14 00:43:10.135401 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 14 00:43:10.180849 containerd[1465]: time="2026-04-14T00:43:10.178163265Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 14 00:43:10.216439 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 14 00:43:10.280159 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 14 00:43:10.325532 systemd[1]: issuegen.service: Deactivated successfully.
Apr 14 00:43:10.326224 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 14 00:43:10.330777 containerd[1465]: time="2026-04-14T00:43:10.330084379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 14 00:43:10.342296 containerd[1465]: time="2026-04-14T00:43:10.342029446Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 14 00:43:10.342296 containerd[1465]: time="2026-04-14T00:43:10.342126857Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 14 00:43:10.342296 containerd[1465]: time="2026-04-14T00:43:10.342150928Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 14 00:43:10.343230 containerd[1465]: time="2026-04-14T00:43:10.343101898Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 14 00:43:10.343230 containerd[1465]: time="2026-04-14T00:43:10.343175846Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 14 00:43:10.343280 containerd[1465]: time="2026-04-14T00:43:10.343247398Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 00:43:10.343280 containerd[1465]: time="2026-04-14T00:43:10.343269148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 14 00:43:10.344783 containerd[1465]: time="2026-04-14T00:43:10.344006137Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 00:43:10.344783 containerd[1465]: time="2026-04-14T00:43:10.344069652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 14 00:43:10.344783 containerd[1465]: time="2026-04-14T00:43:10.344086621Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 00:43:10.344783 containerd[1465]: time="2026-04-14T00:43:10.344099896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 14 00:43:10.344783 containerd[1465]: time="2026-04-14T00:43:10.344229195Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 14 00:43:10.344783 containerd[1465]: time="2026-04-14T00:43:10.344467440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 14 00:43:10.346418 containerd[1465]: time="2026-04-14T00:43:10.344811014Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 00:43:10.346418 containerd[1465]: time="2026-04-14T00:43:10.344877321Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 14 00:43:10.346418 containerd[1465]: time="2026-04-14T00:43:10.344969282Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 14 00:43:10.346418 containerd[1465]: time="2026-04-14T00:43:10.345018567Z" level=info msg="metadata content store policy set" policy=shared
Apr 14 00:43:10.347462 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 14 00:43:10.359852 containerd[1465]: time="2026-04-14T00:43:10.356563849Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 14 00:43:10.359852 containerd[1465]: time="2026-04-14T00:43:10.357025666Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 14 00:43:10.359852 containerd[1465]: time="2026-04-14T00:43:10.357192214Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 14 00:43:10.359852 containerd[1465]: time="2026-04-14T00:43:10.357505985Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 14 00:43:10.359852 containerd[1465]: time="2026-04-14T00:43:10.358197107Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 14 00:43:10.359852 containerd[1465]: time="2026-04-14T00:43:10.358429961Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 14 00:43:10.359852 containerd[1465]: time="2026-04-14T00:43:10.359233138Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 14 00:43:10.359852 containerd[1465]: time="2026-04-14T00:43:10.359493804Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 14 00:43:10.359852 containerd[1465]: time="2026-04-14T00:43:10.359515246Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 14 00:43:10.359852 containerd[1465]: time="2026-04-14T00:43:10.359531644Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 14 00:43:10.361292 containerd[1465]: time="2026-04-14T00:43:10.361035760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 14 00:43:10.361660 containerd[1465]: time="2026-04-14T00:43:10.361633794Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 14 00:43:10.362926 containerd[1465]: time="2026-04-14T00:43:10.362537403Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 14 00:43:10.365818 containerd[1465]: time="2026-04-14T00:43:10.365061755Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 14 00:43:10.367054 containerd[1465]: time="2026-04-14T00:43:10.366844719Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 14 00:43:10.367702 containerd[1465]: time="2026-04-14T00:43:10.367546826Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 14 00:43:10.370509 containerd[1465]: time="2026-04-14T00:43:10.369986430Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 14 00:43:10.371470 containerd[1465]: time="2026-04-14T00:43:10.371100461Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 14 00:43:10.372833 containerd[1465]: time="2026-04-14T00:43:10.372683339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 14 00:43:10.373558 containerd[1465]: time="2026-04-14T00:43:10.373232332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 14 00:43:10.374200 containerd[1465]: time="2026-04-14T00:43:10.374090663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 14 00:43:10.374923 containerd[1465]: time="2026-04-14T00:43:10.374874207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 14 00:43:10.375022 containerd[1465]: time="2026-04-14T00:43:10.375014564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 14 00:43:10.375153 containerd[1465]: time="2026-04-14T00:43:10.375055847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 14 00:43:10.375773 containerd[1465]: time="2026-04-14T00:43:10.375338047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 14 00:43:10.375773 containerd[1465]: time="2026-04-14T00:43:10.375469868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 14 00:43:10.375773 containerd[1465]: time="2026-04-14T00:43:10.375491760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 14 00:43:10.375773 containerd[1465]: time="2026-04-14T00:43:10.375527450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 14 00:43:10.375773 containerd[1465]: time="2026-04-14T00:43:10.375542618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 14 00:43:10.375773 containerd[1465]: time="2026-04-14T00:43:10.375557536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 14 00:43:10.375773 containerd[1465]: time="2026-04-14T00:43:10.375572442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 14 00:43:10.376050 containerd[1465]: time="2026-04-14T00:43:10.376011987Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 14 00:43:10.376107 containerd[1465]: time="2026-04-14T00:43:10.376096515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 14 00:43:10.376157 containerd[1465]: time="2026-04-14T00:43:10.376147068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 14 00:43:10.376203 containerd[1465]: time="2026-04-14T00:43:10.376193376Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 14 00:43:10.376279 containerd[1465]: time="2026-04-14T00:43:10.376271711Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 14 00:43:10.376935 containerd[1465]: time="2026-04-14T00:43:10.376854614Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 14 00:43:10.377471 containerd[1465]: time="2026-04-14T00:43:10.377027188Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 14 00:43:10.377471 containerd[1465]: time="2026-04-14T00:43:10.377046291Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 14 00:43:10.377471 containerd[1465]: time="2026-04-14T00:43:10.377054375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 14 00:43:10.377471 containerd[1465]: time="2026-04-14T00:43:10.377067660Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 14 00:43:10.377471 containerd[1465]: time="2026-04-14T00:43:10.377077412Z" level=info msg="NRI interface is disabled by configuration."
Apr 14 00:43:10.377471 containerd[1465]: time="2026-04-14T00:43:10.377085444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 14 00:43:10.379843 containerd[1465]: time="2026-04-14T00:43:10.378228106Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 14 00:43:10.379843 containerd[1465]: time="2026-04-14T00:43:10.378355331Z" level=info msg="Connect containerd service" Apr 14 00:43:10.379843 containerd[1465]: time="2026-04-14T00:43:10.378410105Z" level=info msg="using legacy CRI server" Apr 14 00:43:10.379843 containerd[1465]: time="2026-04-14T00:43:10.378419322Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 14 00:43:10.379843 containerd[1465]: time="2026-04-14T00:43:10.378817665Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 14 00:43:10.379843 containerd[1465]: time="2026-04-14T00:43:10.379782440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 14 00:43:10.380945 containerd[1465]: time="2026-04-14T00:43:10.380254297Z" level=info msg="Start subscribing containerd event" Apr 14 00:43:10.380945 containerd[1465]: time="2026-04-14T00:43:10.380366139Z" level=info msg="Start recovering state" Apr 14 00:43:10.380945 containerd[1465]: time="2026-04-14T00:43:10.380874829Z" level=info msg="Start event monitor" Apr 14 00:43:10.381043 containerd[1465]: time="2026-04-14T00:43:10.380988886Z" level=info msg="Start 
snapshots syncer" Apr 14 00:43:10.381043 containerd[1465]: time="2026-04-14T00:43:10.380999421Z" level=info msg="Start cni network conf syncer for default" Apr 14 00:43:10.381043 containerd[1465]: time="2026-04-14T00:43:10.381005025Z" level=info msg="Start streaming server" Apr 14 00:43:10.382082 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 14 00:43:10.382372 containerd[1465]: time="2026-04-14T00:43:10.382196926Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 14 00:43:10.382780 containerd[1465]: time="2026-04-14T00:43:10.382733300Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 14 00:43:10.384206 containerd[1465]: time="2026-04-14T00:43:10.383987144Z" level=info msg="containerd successfully booted in 0.209545s" Apr 14 00:43:10.388791 systemd[1]: Started containerd.service - containerd container runtime. Apr 14 00:43:10.411410 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 14 00:43:10.439077 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 14 00:43:10.445227 systemd[1]: Reached target getty.target - Login Prompts. Apr 14 00:43:11.472211 tar[1462]: linux-amd64/README.md Apr 14 00:43:11.623444 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 14 00:43:14.691335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:43:14.701444 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 14 00:43:14.758731 systemd[1]: Startup finished in 2.556s (kernel) + 14.740s (initrd) + 13.180s (userspace) = 30.477s. 
Apr 14 00:43:14.897023 (kubelet)[1550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 00:43:19.081837 kubelet[1550]: E0414 00:43:19.077294 1550 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 00:43:19.096362 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 00:43:19.097111 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 00:43:19.099282 systemd[1]: kubelet.service: Consumed 4.052s CPU time. Apr 14 00:43:19.155547 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 14 00:43:19.244251 systemd[1]: Started sshd@0-10.0.0.23:22-10.0.0.1:43988.service - OpenSSH per-connection server daemon (10.0.0.1:43988). Apr 14 00:43:19.650390 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 43988 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:43:19.669553 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:43:19.746832 systemd-logind[1448]: New session 1 of user core. Apr 14 00:43:19.751920 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 14 00:43:19.768787 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 14 00:43:19.829836 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 14 00:43:19.870852 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 14 00:43:19.976176 (systemd)[1567]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 14 00:43:20.429252 systemd[1567]: Queued start job for default target default.target. Apr 14 00:43:20.439360 systemd[1567]: Created slice app.slice - User Application Slice. Apr 14 00:43:20.439387 systemd[1567]: Reached target paths.target - Paths. Apr 14 00:43:20.439398 systemd[1567]: Reached target timers.target - Timers. Apr 14 00:43:20.455512 systemd[1567]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 14 00:43:20.551444 systemd[1567]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 14 00:43:20.552916 systemd[1567]: Reached target sockets.target - Sockets. Apr 14 00:43:20.552940 systemd[1567]: Reached target basic.target - Basic System. Apr 14 00:43:20.553060 systemd[1567]: Reached target default.target - Main User Target. Apr 14 00:43:20.553096 systemd[1567]: Startup finished in 547ms. Apr 14 00:43:20.555539 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 14 00:43:20.580090 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 14 00:43:20.775896 systemd[1]: Started sshd@1-10.0.0.23:22-10.0.0.1:44000.service - OpenSSH per-connection server daemon (10.0.0.1:44000). Apr 14 00:43:20.912564 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 44000 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:43:20.929983 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:43:20.987136 systemd-logind[1448]: New session 2 of user core. Apr 14 00:43:21.022755 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 14 00:43:21.180352 sshd[1578]: pam_unix(sshd:session): session closed for user core Apr 14 00:43:21.198269 systemd[1]: sshd@1-10.0.0.23:22-10.0.0.1:44000.service: Deactivated successfully. Apr 14 00:43:21.204096 systemd[1]: session-2.scope: Deactivated successfully. 
Apr 14 00:43:21.205509 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Apr 14 00:43:21.220431 systemd[1]: Started sshd@2-10.0.0.23:22-10.0.0.1:44002.service - OpenSSH per-connection server daemon (10.0.0.1:44002). Apr 14 00:43:21.227397 systemd-logind[1448]: Removed session 2. Apr 14 00:43:21.497213 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 44002 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:43:21.501519 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:43:21.543651 systemd-logind[1448]: New session 3 of user core. Apr 14 00:43:21.568324 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 14 00:43:21.675468 sshd[1585]: pam_unix(sshd:session): session closed for user core Apr 14 00:43:21.694404 systemd[1]: sshd@2-10.0.0.23:22-10.0.0.1:44002.service: Deactivated successfully. Apr 14 00:43:21.702331 systemd[1]: session-3.scope: Deactivated successfully. Apr 14 00:43:21.708351 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Apr 14 00:43:21.728364 systemd[1]: Started sshd@3-10.0.0.23:22-10.0.0.1:44010.service - OpenSSH per-connection server daemon (10.0.0.1:44010). Apr 14 00:43:21.748105 systemd-logind[1448]: Removed session 3. Apr 14 00:43:22.079452 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 44010 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:43:22.092866 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:43:22.165370 systemd-logind[1448]: New session 4 of user core. Apr 14 00:43:22.184491 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 14 00:43:22.413873 sshd[1592]: pam_unix(sshd:session): session closed for user core Apr 14 00:43:22.451953 systemd[1]: sshd@3-10.0.0.23:22-10.0.0.1:44010.service: Deactivated successfully. Apr 14 00:43:22.490374 systemd[1]: session-4.scope: Deactivated successfully. 
Apr 14 00:43:22.565490 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Apr 14 00:43:22.597833 systemd[1]: Started sshd@4-10.0.0.23:22-10.0.0.1:44018.service - OpenSSH per-connection server daemon (10.0.0.1:44018). Apr 14 00:43:22.606795 systemd-logind[1448]: Removed session 4. Apr 14 00:43:22.780634 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 44018 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 00:43:22.792515 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:43:22.871262 systemd-logind[1448]: New session 5 of user core. Apr 14 00:43:22.938245 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 14 00:43:23.160200 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 14 00:43:23.162419 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 00:43:24.972357 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 14 00:43:24.977237 (dockerd)[1620]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 14 00:43:29.358437 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 14 00:43:29.420486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:43:31.115042 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 14 00:43:31.198478 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 00:43:32.861255 kubelet[1634]: E0414 00:43:32.861086 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 00:43:32.872875 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 00:43:32.873019 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 00:43:32.873421 systemd[1]: kubelet.service: Consumed 1.996s CPU time. Apr 14 00:43:38.093775 dockerd[1620]: time="2026-04-14T00:43:38.061415879Z" level=info msg="Starting up" Apr 14 00:43:43.089039 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 14 00:43:43.113412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:43:44.158435 systemd[1]: var-lib-docker-metacopy\x2dcheck1445446589-merged.mount: Deactivated successfully. Apr 14 00:43:44.606210 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:43:44.650214 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 00:43:45.551364 dockerd[1620]: time="2026-04-14T00:43:45.549479353Z" level=info msg="Loading containers: start." 
Apr 14 00:43:46.884702 kubelet[1669]: E0414 00:43:46.884378 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 00:43:46.889543 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 00:43:46.890326 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 00:43:46.896470 systemd[1]: kubelet.service: Consumed 2.326s CPU time. Apr 14 00:43:52.094354 kernel: Initializing XFRM netlink socket Apr 14 00:43:54.291027 update_engine[1452]: I20260414 00:43:54.289920 1452 update_attempter.cc:509] Updating boot flags... Apr 14 00:43:54.732679 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1747) Apr 14 00:43:55.745732 systemd-networkd[1389]: docker0: Link UP Apr 14 00:43:56.230555 dockerd[1620]: time="2026-04-14T00:43:56.229552066Z" level=info msg="Loading containers: done." Apr 14 00:43:56.646854 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck924466172-merged.mount: Deactivated successfully. Apr 14 00:43:56.849865 dockerd[1620]: time="2026-04-14T00:43:56.842389719Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 14 00:43:56.864772 dockerd[1620]: time="2026-04-14T00:43:56.863507482Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 14 00:43:56.872647 dockerd[1620]: time="2026-04-14T00:43:56.869380543Z" level=info msg="Daemon has completed initialization" Apr 14 00:43:57.126080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Apr 14 00:43:57.207270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:43:58.494072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:43:58.590761 (kubelet)[1802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 00:43:59.986657 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1102985319 wd_nsec: 1102985279 Apr 14 00:44:01.410702 kubelet[1802]: E0414 00:44:01.409486 1802 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 00:44:01.432836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 00:44:01.435447 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 00:44:01.437431 systemd[1]: kubelet.service: Consumed 2.539s CPU time. Apr 14 00:44:03.347981 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 14 00:44:03.354743 dockerd[1620]: time="2026-04-14T00:44:03.345879809Z" level=info msg="API listen on /run/docker.sock" Apr 14 00:44:11.587042 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 14 00:44:11.605307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:44:12.560900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 14 00:44:12.563168 (kubelet)[1835]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 00:44:13.767305 kubelet[1835]: E0414 00:44:13.762891 1835 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 00:44:13.774029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 00:44:13.774953 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 00:44:13.777397 systemd[1]: kubelet.service: Consumed 1.448s CPU time. Apr 14 00:44:19.016192 containerd[1465]: time="2026-04-14T00:44:19.015978838Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\"" Apr 14 00:44:20.502115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2385388514.mount: Deactivated successfully. Apr 14 00:44:23.834477 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 14 00:44:23.898694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:44:24.545369 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 14 00:44:24.616698 (kubelet)[1911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 00:44:24.953240 kubelet[1911]: E0414 00:44:24.953185 1911 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 00:44:24.964614 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 00:44:24.965741 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 00:44:25.343679 containerd[1465]: time="2026-04-14T00:44:25.343358518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:44:25.345469 containerd[1465]: time="2026-04-14T00:44:25.345188174Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=29988857" Apr 14 00:44:25.382817 containerd[1465]: time="2026-04-14T00:44:25.382512137Z" level=info msg="ImageCreate event name:\"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:44:25.461506 containerd[1465]: time="2026-04-14T00:44:25.461201642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:44:25.504737 containerd[1465]: time="2026-04-14T00:44:25.504414855Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", 
repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"29986018\" in 6.488255231s" Apr 14 00:44:25.505011 containerd[1465]: time="2026-04-14T00:44:25.504860559Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\"" Apr 14 00:44:25.531738 containerd[1465]: time="2026-04-14T00:44:25.530442743Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\"" Apr 14 00:44:28.870429 containerd[1465]: time="2026-04-14T00:44:28.870178724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:44:28.871391 containerd[1465]: time="2026-04-14T00:44:28.871344945Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=26021841" Apr 14 00:44:28.874545 containerd[1465]: time="2026-04-14T00:44:28.874400230Z" level=info msg="ImageCreate event name:\"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:44:28.886912 containerd[1465]: time="2026-04-14T00:44:28.886528321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:44:28.893605 containerd[1465]: time="2026-04-14T00:44:28.893323295Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size 
\"27552094\" in 3.360922592s" Apr 14 00:44:28.893880 containerd[1465]: time="2026-04-14T00:44:28.893649572Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\"" Apr 14 00:44:28.897901 containerd[1465]: time="2026-04-14T00:44:28.897811029Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\"" Apr 14 00:44:32.277825 containerd[1465]: time="2026-04-14T00:44:32.277436949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:44:32.282333 containerd[1465]: time="2026-04-14T00:44:32.282017380Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=20162685" Apr 14 00:44:32.287953 containerd[1465]: time="2026-04-14T00:44:32.287763985Z" level=info msg="ImageCreate event name:\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:44:32.355950 containerd[1465]: time="2026-04-14T00:44:32.355355720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:44:32.370790 containerd[1465]: time="2026-04-14T00:44:32.370642563Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"21692956\" in 3.47274759s" Apr 14 00:44:32.370790 containerd[1465]: time="2026-04-14T00:44:32.370777948Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" 
returns image reference \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\"" Apr 14 00:44:32.383406 containerd[1465]: time="2026-04-14T00:44:32.383215942Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\"" Apr 14 00:44:35.085252 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 14 00:44:35.118032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:44:35.788678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:44:35.829287 (kubelet)[1940]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 00:44:36.206250 kubelet[1940]: E0414 00:44:36.205531 1940 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 00:44:36.219446 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 00:44:36.219873 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 00:44:36.574428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount12987397.mount: Deactivated successfully. 
Apr 14 00:44:41.971722 containerd[1465]: time="2026-04-14T00:44:41.971234142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:44:41.972966 containerd[1465]: time="2026-04-14T00:44:41.971266181Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=31828657" Apr 14 00:44:41.976701 containerd[1465]: time="2026-04-14T00:44:41.975739082Z" level=info msg="ImageCreate event name:\"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:44:41.993673 containerd[1465]: time="2026-04-14T00:44:41.992621026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:44:42.014476 containerd[1465]: time="2026-04-14T00:44:42.014163938Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"31827782\" in 9.630793854s" Apr 14 00:44:42.015032 containerd[1465]: time="2026-04-14T00:44:42.014863004Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\"" Apr 14 00:44:42.115871 containerd[1465]: time="2026-04-14T00:44:42.114278714Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 14 00:44:43.508936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount505793710.mount: Deactivated successfully. 
Apr 14 00:44:46.338238 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 14 00:44:46.358831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:44:46.874194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:44:46.896474 (kubelet)[2012]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 00:44:47.175708 kubelet[2012]: E0414 00:44:47.175165 2012 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 00:44:47.180092 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 00:44:47.180301 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 14 00:44:47.727213 containerd[1465]: time="2026-04-14T00:44:47.726718647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:44:47.729498 containerd[1465]: time="2026-04-14T00:44:47.729185866Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 14 00:44:47.735552 containerd[1465]: time="2026-04-14T00:44:47.734932512Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:44:47.884351 containerd[1465]: time="2026-04-14T00:44:47.884066989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:44:47.889820 containerd[1465]: time="2026-04-14T00:44:47.888421349Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 5.772671277s" Apr 14 00:44:47.889820 containerd[1465]: time="2026-04-14T00:44:47.888983204Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 14 00:44:47.922726 containerd[1465]: time="2026-04-14T00:44:47.922412957Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 14 00:44:49.033414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1916693235.mount: Deactivated successfully. 
Apr 14 00:44:49.069730 containerd[1465]: time="2026-04-14T00:44:49.069047222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:44:49.080854 containerd[1465]: time="2026-04-14T00:44:49.078365351Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 14 00:44:49.086749 containerd[1465]: time="2026-04-14T00:44:49.086368849Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:44:49.094827 containerd[1465]: time="2026-04-14T00:44:49.094558421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:44:49.101185 containerd[1465]: time="2026-04-14T00:44:49.100965027Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.177815124s" Apr 14 00:44:49.101512 containerd[1465]: time="2026-04-14T00:44:49.101221684Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 14 00:44:49.137059 containerd[1465]: time="2026-04-14T00:44:49.136844391Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 14 00:44:50.387852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3288998126.mount: Deactivated successfully. Apr 14 00:44:57.332949 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. 
Apr 14 00:44:57.372533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 00:44:57.643306 containerd[1465]: time="2026-04-14T00:44:57.643069195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 00:44:57.657080 containerd[1465]: time="2026-04-14T00:44:57.656984664Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718278"
Apr 14 00:44:57.664411 containerd[1465]: time="2026-04-14T00:44:57.664219401Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 00:44:57.718955 containerd[1465]: time="2026-04-14T00:44:57.714203255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 00:44:57.727792 containerd[1465]: time="2026-04-14T00:44:57.726352488Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 8.589321069s"
Apr 14 00:44:57.727792 containerd[1465]: time="2026-04-14T00:44:57.726419372Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 14 00:44:58.111415 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 00:44:58.182293 (kubelet)[2102]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 14 00:44:59.395726 kubelet[2102]: E0414 00:44:59.395444 2102 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 14 00:44:59.425371 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 14 00:44:59.427333 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 14 00:44:59.428054 systemd[1]: kubelet.service: Consumed 1.360s CPU time.
Apr 14 00:45:09.582410 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Apr 14 00:45:09.650256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 00:45:10.517371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 00:45:10.539217 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 14 00:45:10.924887 kubelet[2136]: E0414 00:45:10.924196 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 14 00:45:10.936459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 14 00:45:10.936775 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 14 00:45:21.097214 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Apr 14 00:45:21.132336 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 00:45:21.763284 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 00:45:21.787227 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 14 00:45:22.633211 kubelet[2153]: E0414 00:45:22.632141 2153 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 14 00:45:22.642402 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 14 00:45:22.642741 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 14 00:45:22.645838 systemd[1]: kubelet.service: Consumed 1.151s CPU time.
Apr 14 00:45:30.942622 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 00:45:30.942759 systemd[1]: kubelet.service: Consumed 1.151s CPU time.
Apr 14 00:45:30.959661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 00:45:31.028012 systemd[1]: Reloading requested from client PID 2168 ('systemctl') (unit session-5.scope)...
Apr 14 00:45:31.028226 systemd[1]: Reloading...
Apr 14 00:45:31.313037 zram_generator::config[2210]: No configuration found.
Apr 14 00:45:32.107939 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 00:45:32.443759 systemd[1]: Reloading finished in 1414 ms.
Apr 14 00:45:32.723199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 00:45:32.732083 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 00:45:32.733139 systemd[1]: kubelet.service: Deactivated successfully.
Apr 14 00:45:32.733475 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 00:45:32.762502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 00:45:33.865951 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 00:45:33.887937 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 14 00:45:35.376137 kubelet[2257]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 14 00:45:35.376137 kubelet[2257]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 14 00:45:35.376137 kubelet[2257]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 14 00:45:35.380016 kubelet[2257]: I0414 00:45:35.378552 2257 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 14 00:45:37.144649 kubelet[2257]: I0414 00:45:37.144562 2257 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 14 00:45:37.144649 kubelet[2257]: I0414 00:45:37.144643 2257 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 14 00:45:37.145293 kubelet[2257]: I0414 00:45:37.145201 2257 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 14 00:45:37.204306 kubelet[2257]: E0414 00:45:37.204085 2257 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 00:45:37.223751 kubelet[2257]: I0414 00:45:37.223481 2257 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 14 00:45:37.252476 kubelet[2257]: E0414 00:45:37.252069 2257 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 14 00:45:37.252760 kubelet[2257]: I0414 00:45:37.252508 2257 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 14 00:45:37.277364 kubelet[2257]: I0414 00:45:37.277251 2257 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 14 00:45:37.278621 kubelet[2257]: I0414 00:45:37.278451 2257 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 14 00:45:37.278971 kubelet[2257]: I0414 00:45:37.278533 2257 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 14 00:45:37.278971 kubelet[2257]: I0414 00:45:37.278938 2257 topology_manager.go:138] "Creating topology manager with none policy"
Apr 14 00:45:37.278971 kubelet[2257]: I0414 00:45:37.278946 2257 container_manager_linux.go:303] "Creating device plugin manager"
Apr 14 00:45:37.279176 kubelet[2257]: I0414 00:45:37.279110 2257 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 00:45:37.286795 kubelet[2257]: I0414 00:45:37.286486 2257 kubelet.go:480] "Attempting to sync node with API server"
Apr 14 00:45:37.286795 kubelet[2257]: I0414 00:45:37.286811 2257 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 14 00:45:37.287062 kubelet[2257]: I0414 00:45:37.286978 2257 kubelet.go:386] "Adding apiserver pod source"
Apr 14 00:45:37.287062 kubelet[2257]: I0414 00:45:37.286999 2257 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 14 00:45:37.292688 kubelet[2257]: E0414 00:45:37.292069 2257 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 14 00:45:37.292688 kubelet[2257]: E0414 00:45:37.292182 2257 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 14 00:45:37.298708 kubelet[2257]: I0414 00:45:37.296060 2257 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 14 00:45:37.299326 kubelet[2257]: I0414 00:45:37.299269 2257 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 14 00:45:37.301831 kubelet[2257]: W0414 00:45:37.301751 2257 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 14 00:45:37.306154 kubelet[2257]: I0414 00:45:37.306106 2257 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 14 00:45:37.306238 kubelet[2257]: I0414 00:45:37.306211 2257 server.go:1289] "Started kubelet"
Apr 14 00:45:37.306963 kubelet[2257]: I0414 00:45:37.306643 2257 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 14 00:45:37.310747 kubelet[2257]: I0414 00:45:37.307270 2257 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 14 00:45:37.310747 kubelet[2257]: I0414 00:45:37.309023 2257 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 14 00:45:37.310865 kubelet[2257]: I0414 00:45:37.310813 2257 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 14 00:45:37.314487 kubelet[2257]: E0414 00:45:37.310901 2257 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.23:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.23:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a61295b8ebeb7d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:45:37.306135421 +0000 UTC m=+3.387885302,LastTimestamp:2026-04-14 00:45:37.306135421 +0000 UTC m=+3.387885302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 00:45:37.328734 kubelet[2257]: I0414 00:45:37.328674 2257 server.go:317] "Adding debug handlers to kubelet server"
Apr 14 00:45:37.328868 kubelet[2257]: I0414 00:45:37.328799 2257 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 14 00:45:37.331804 kubelet[2257]: I0414 00:45:37.331719 2257 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 14 00:45:37.333210 kubelet[2257]: E0414 00:45:37.333154 2257 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:45:37.336120 kubelet[2257]: I0414 00:45:37.336026 2257 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 14 00:45:37.336235 kubelet[2257]: I0414 00:45:37.336191 2257 reconciler.go:26] "Reconciler: start to sync state"
Apr 14 00:45:37.336981 kubelet[2257]: E0414 00:45:37.336879 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="200ms"
Apr 14 00:45:37.336981 kubelet[2257]: E0414 00:45:37.336935 2257 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 14 00:45:37.337370 kubelet[2257]: I0414 00:45:37.337286 2257 factory.go:223] Registration of the systemd container factory successfully
Apr 14 00:45:37.337422 kubelet[2257]: I0414 00:45:37.337402 2257 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 14 00:45:37.338828 kubelet[2257]: E0414 00:45:37.338078 2257 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 14 00:45:37.346646 kubelet[2257]: I0414 00:45:37.346600 2257 factory.go:223] Registration of the containerd container factory successfully
Apr 14 00:45:37.375241 kubelet[2257]: I0414 00:45:37.374940 2257 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 14 00:45:37.386941 kubelet[2257]: I0414 00:45:37.386124 2257 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 14 00:45:37.388331 kubelet[2257]: I0414 00:45:37.387881 2257 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 14 00:45:37.391907 kubelet[2257]: I0414 00:45:37.391823 2257 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 14 00:45:37.392898 kubelet[2257]: I0414 00:45:37.392017 2257 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 14 00:45:37.417003 kubelet[2257]: E0414 00:45:37.413344 2257 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 14 00:45:37.440807 kubelet[2257]: E0414 00:45:37.438122 2257 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:45:37.481845 kubelet[2257]: E0414 00:45:37.462959 2257 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 14 00:45:37.518647 kubelet[2257]: E0414 00:45:37.516164 2257 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 00:45:37.530406 kubelet[2257]: I0414 00:45:37.530078 2257 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 14 00:45:37.531164 kubelet[2257]: I0414 00:45:37.530874 2257 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 14 00:45:37.531387 kubelet[2257]: I0414 00:45:37.531288 2257 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 00:45:37.541067 kubelet[2257]: E0414 00:45:37.538520 2257 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:45:37.541809 kubelet[2257]: I0414 00:45:37.541758 2257 policy_none.go:49] "None policy: Start"
Apr 14 00:45:37.541809 kubelet[2257]: I0414 00:45:37.541804 2257 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 14 00:45:37.541975 kubelet[2257]: I0414 00:45:37.541819 2257 state_mem.go:35] "Initializing new in-memory state store"
Apr 14 00:45:37.545129 kubelet[2257]: E0414 00:45:37.544897 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="400ms"
Apr 14 00:45:37.570689 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 14 00:45:37.614110 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 14 00:45:37.633560 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 14 00:45:37.643340 kubelet[2257]: E0414 00:45:37.643062 2257 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:45:37.660760 kubelet[2257]: E0414 00:45:37.660698 2257 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 14 00:45:37.660981 kubelet[2257]: I0414 00:45:37.660939 2257 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 14 00:45:37.660981 kubelet[2257]: I0414 00:45:37.660952 2257 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 14 00:45:37.661332 kubelet[2257]: I0414 00:45:37.661262 2257 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 14 00:45:37.698879 kubelet[2257]: E0414 00:45:37.693843 2257 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 14 00:45:37.700834 kubelet[2257]: E0414 00:45:37.699442 2257 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:45:37.776399 kubelet[2257]: I0414 00:45:37.775991 2257 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 00:45:37.777251 kubelet[2257]: E0414 00:45:37.776927 2257 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost"
Apr 14 00:45:37.842080 systemd[1]: Created slice kubepods-burstable-podc33ff94a876322bf38595d0014c62f9d.slice - libcontainer container kubepods-burstable-podc33ff94a876322bf38595d0014c62f9d.slice.
Apr 14 00:45:37.848065 kubelet[2257]: I0414 00:45:37.847442 2257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c33ff94a876322bf38595d0014c62f9d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c33ff94a876322bf38595d0014c62f9d\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 00:45:37.848065 kubelet[2257]: I0414 00:45:37.847667 2257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:45:37.848065 kubelet[2257]: I0414 00:45:37.847696 2257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:45:37.848065 kubelet[2257]: I0414 00:45:37.847756 2257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:45:37.848065 kubelet[2257]: I0414 00:45:37.847794 2257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c33ff94a876322bf38595d0014c62f9d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c33ff94a876322bf38595d0014c62f9d\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 00:45:37.848328 kubelet[2257]: I0414 00:45:37.847817 2257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c33ff94a876322bf38595d0014c62f9d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c33ff94a876322bf38595d0014c62f9d\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 00:45:37.848328 kubelet[2257]: I0414 00:45:37.847835 2257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:45:37.848328 kubelet[2257]: I0414 00:45:37.847852 2257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:45:37.848328 kubelet[2257]: I0414 00:45:37.847934 2257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost"
Apr 14 00:45:37.872056 kubelet[2257]: E0414 00:45:37.871982 2257 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:45:37.917220 systemd[1]: Created slice kubepods-burstable-podebf8e820819e4b80bc03d078b9ba80f5.slice - libcontainer container kubepods-burstable-podebf8e820819e4b80bc03d078b9ba80f5.slice.
Apr 14 00:45:37.941540 kubelet[2257]: E0414 00:45:37.939180 2257 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:45:38.006038 kubelet[2257]: E0414 00:45:38.005468 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="800ms"
Apr 14 00:45:38.024879 systemd[1]: Created slice kubepods-burstable-pod39798d73a6894e44ae801eb773bf9a39.slice - libcontainer container kubepods-burstable-pod39798d73a6894e44ae801eb773bf9a39.slice.
Apr 14 00:45:38.026452 kubelet[2257]: I0414 00:45:38.026396 2257 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 00:45:38.027702 kubelet[2257]: E0414 00:45:38.027420 2257 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost"
Apr 14 00:45:38.044716 kubelet[2257]: E0414 00:45:38.042874 2257 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:45:38.059286 kubelet[2257]: E0414 00:45:38.059174 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:45:38.069970 containerd[1465]: time="2026-04-14T00:45:38.068811053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,}"
Apr 14 00:45:38.175114 kubelet[2257]: E0414 00:45:38.175072 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:45:38.189675 containerd[1465]: time="2026-04-14T00:45:38.189247901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c33ff94a876322bf38595d0014c62f9d,Namespace:kube-system,Attempt:0,}"
Apr 14 00:45:38.295235 kubelet[2257]: E0414 00:45:38.294998 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:45:38.302170 containerd[1465]: time="2026-04-14T00:45:38.301993181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,}"
Apr 14 00:45:38.497557 kubelet[2257]: E0414 00:45:38.497507 2257 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 14 00:45:38.509287 kubelet[2257]: I0414 00:45:38.508228 2257 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 00:45:38.519095 kubelet[2257]: E0414 00:45:38.518305 2257 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost"
Apr 14 00:45:38.663649 kubelet[2257]: E0414 00:45:38.663525 2257 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 14 00:45:38.840503 kubelet[2257]: E0414 00:45:38.839108 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="1.6s"
Apr 14 00:45:38.854977 kubelet[2257]: E0414 00:45:38.854721 2257 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 14 00:45:38.954261 kubelet[2257]: E0414 00:45:38.949036 2257 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 14 00:45:39.178463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1338189858.mount: Deactivated successfully.
Apr 14 00:45:39.205200 containerd[1465]: time="2026-04-14T00:45:39.204113411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 00:45:39.214557 containerd[1465]: time="2026-04-14T00:45:39.214121448Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 14 00:45:39.223336 containerd[1465]: time="2026-04-14T00:45:39.223009943Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 00:45:39.232707 containerd[1465]: time="2026-04-14T00:45:39.231044544Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 00:45:39.232707 containerd[1465]: time="2026-04-14T00:45:39.231847579Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988"
Apr 14 00:45:39.234605 containerd[1465]: time="2026-04-14T00:45:39.234097378Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 14 00:45:39.304183 containerd[1465]: time="2026-04-14T00:45:39.303741986Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 00:45:39.323330 containerd[1465]: time="2026-04-14T00:45:39.323231600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 00:45:39.333111 containerd[1465]: time="2026-04-14T00:45:39.333022707Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.263728257s"
Apr 14 00:45:39.337099 containerd[1465]: time="2026-04-14T00:45:39.336989492Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.146966127s"
Apr 14 00:45:39.341937 kubelet[2257]: I0414 00:45:39.334297 2257 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 00:45:39.345278 containerd[1465]: time="2026-04-14T00:45:39.345189399Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.042081604s"
Apr 14 00:45:39.365958 kubelet[2257]: E0414 00:45:39.365219 2257 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost"
Apr 14 00:45:39.398550 kubelet[2257]: E0414 00:45:39.396460 2257 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 00:45:39.661334 containerd[1465]: time="2026-04-14T00:45:39.660524777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 00:45:39.666291 containerd[1465]: time="2026-04-14T00:45:39.663717430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 00:45:39.666291 containerd[1465]: time="2026-04-14T00:45:39.663739345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:45:39.666291 containerd[1465]: time="2026-04-14T00:45:39.663805649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:45:39.675457 containerd[1465]: time="2026-04-14T00:45:39.673350215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 00:45:39.675457 containerd[1465]: time="2026-04-14T00:45:39.673446008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 00:45:39.675457 containerd[1465]: time="2026-04-14T00:45:39.673466689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:45:39.678527 containerd[1465]: time="2026-04-14T00:45:39.675677166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:45:39.688782 containerd[1465]: time="2026-04-14T00:45:39.686184783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:45:39.688782 containerd[1465]: time="2026-04-14T00:45:39.686277260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:45:39.688782 containerd[1465]: time="2026-04-14T00:45:39.686307916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:45:39.688782 containerd[1465]: time="2026-04-14T00:45:39.686462735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:45:39.702109 systemd[1]: Started cri-containerd-e54cf989d08504787f108bb75404bb840b275cafb35ab5ea49dc4252d64e4d53.scope - libcontainer container e54cf989d08504787f108bb75404bb840b275cafb35ab5ea49dc4252d64e4d53. Apr 14 00:45:39.728029 systemd[1]: Started cri-containerd-0fd9ece348cefe878654d6658e5d97cd9ae177cc12e18d0ca8519835913e6121.scope - libcontainer container 0fd9ece348cefe878654d6658e5d97cd9ae177cc12e18d0ca8519835913e6121. Apr 14 00:45:39.792321 systemd[1]: Started cri-containerd-6400c867c4b23c4bb5d9dfb4e987dd4601b395025f8a1f365c3009767fd9101b.scope - libcontainer container 6400c867c4b23c4bb5d9dfb4e987dd4601b395025f8a1f365c3009767fd9101b. 
Apr 14 00:45:39.850557 containerd[1465]: time="2026-04-14T00:45:39.848249064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c33ff94a876322bf38595d0014c62f9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e54cf989d08504787f108bb75404bb840b275cafb35ab5ea49dc4252d64e4d53\"" Apr 14 00:45:39.860629 kubelet[2257]: E0414 00:45:39.859101 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:39.869908 containerd[1465]: time="2026-04-14T00:45:39.869161459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fd9ece348cefe878654d6658e5d97cd9ae177cc12e18d0ca8519835913e6121\"" Apr 14 00:45:39.880073 kubelet[2257]: E0414 00:45:39.879978 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:39.996058 containerd[1465]: time="2026-04-14T00:45:39.993077033Z" level=info msg="CreateContainer within sandbox \"0fd9ece348cefe878654d6658e5d97cd9ae177cc12e18d0ca8519835913e6121\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 14 00:45:39.997093 containerd[1465]: time="2026-04-14T00:45:39.995375536Z" level=info msg="CreateContainer within sandbox \"e54cf989d08504787f108bb75404bb840b275cafb35ab5ea49dc4252d64e4d53\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 14 00:45:40.024928 containerd[1465]: time="2026-04-14T00:45:40.024446879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"6400c867c4b23c4bb5d9dfb4e987dd4601b395025f8a1f365c3009767fd9101b\"" Apr 14 00:45:40.033177 
kubelet[2257]: E0414 00:45:40.033063 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:40.066097 containerd[1465]: time="2026-04-14T00:45:40.065742850Z" level=info msg="CreateContainer within sandbox \"e54cf989d08504787f108bb75404bb840b275cafb35ab5ea49dc4252d64e4d53\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"08e3db18c3c3fbacf0b82fdc44f565379a6d75641ecaaf77c3f1ce5ffc243ada\"" Apr 14 00:45:40.069390 containerd[1465]: time="2026-04-14T00:45:40.069319564Z" level=info msg="CreateContainer within sandbox \"0fd9ece348cefe878654d6658e5d97cd9ae177cc12e18d0ca8519835913e6121\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d33bba1bb2049d4267b0739863fa7d09bffddd48ca989f005d17993c08e843c5\"" Apr 14 00:45:40.070069 containerd[1465]: time="2026-04-14T00:45:40.070011528Z" level=info msg="StartContainer for \"d33bba1bb2049d4267b0739863fa7d09bffddd48ca989f005d17993c08e843c5\"" Apr 14 00:45:40.072662 containerd[1465]: time="2026-04-14T00:45:40.072181187Z" level=info msg="StartContainer for \"08e3db18c3c3fbacf0b82fdc44f565379a6d75641ecaaf77c3f1ce5ffc243ada\"" Apr 14 00:45:40.075052 containerd[1465]: time="2026-04-14T00:45:40.074956105Z" level=info msg="CreateContainer within sandbox \"6400c867c4b23c4bb5d9dfb4e987dd4601b395025f8a1f365c3009767fd9101b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 14 00:45:40.113061 systemd[1]: Started cri-containerd-d33bba1bb2049d4267b0739863fa7d09bffddd48ca989f005d17993c08e843c5.scope - libcontainer container d33bba1bb2049d4267b0739863fa7d09bffddd48ca989f005d17993c08e843c5. 
Apr 14 00:45:40.120648 containerd[1465]: time="2026-04-14T00:45:40.119086388Z" level=info msg="CreateContainer within sandbox \"6400c867c4b23c4bb5d9dfb4e987dd4601b395025f8a1f365c3009767fd9101b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3a6aeac7e96f9faeadd22ba43604696bdf1eb3e0b6361a00c9f65b957cc72e07\"" Apr 14 00:45:40.135094 systemd[1]: Started cri-containerd-08e3db18c3c3fbacf0b82fdc44f565379a6d75641ecaaf77c3f1ce5ffc243ada.scope - libcontainer container 08e3db18c3c3fbacf0b82fdc44f565379a6d75641ecaaf77c3f1ce5ffc243ada. Apr 14 00:45:40.213019 containerd[1465]: time="2026-04-14T00:45:40.212972885Z" level=info msg="StartContainer for \"3a6aeac7e96f9faeadd22ba43604696bdf1eb3e0b6361a00c9f65b957cc72e07\"" Apr 14 00:45:40.333917 systemd[1]: Started cri-containerd-3a6aeac7e96f9faeadd22ba43604696bdf1eb3e0b6361a00c9f65b957cc72e07.scope - libcontainer container 3a6aeac7e96f9faeadd22ba43604696bdf1eb3e0b6361a00c9f65b957cc72e07. Apr 14 00:45:40.502740 containerd[1465]: time="2026-04-14T00:45:40.502129332Z" level=info msg="StartContainer for \"d33bba1bb2049d4267b0739863fa7d09bffddd48ca989f005d17993c08e843c5\" returns successfully" Apr 14 00:45:40.520634 containerd[1465]: time="2026-04-14T00:45:40.519658046Z" level=info msg="StartContainer for \"08e3db18c3c3fbacf0b82fdc44f565379a6d75641ecaaf77c3f1ce5ffc243ada\" returns successfully" Apr 14 00:45:40.550314 kubelet[2257]: E0414 00:45:40.549958 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="3.2s" Apr 14 00:45:40.666965 containerd[1465]: time="2026-04-14T00:45:40.666340286Z" level=info msg="StartContainer for \"3a6aeac7e96f9faeadd22ba43604696bdf1eb3e0b6361a00c9f65b957cc72e07\" returns successfully" Apr 14 00:45:41.060373 kubelet[2257]: I0414 00:45:41.060174 2257 kubelet_node_status.go:75] 
"Attempting to register node" node="localhost" Apr 14 00:45:41.097329 kubelet[2257]: E0414 00:45:41.095900 2257 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:45:41.097727 kubelet[2257]: E0414 00:45:41.097525 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:41.154639 kubelet[2257]: E0414 00:45:41.154530 2257 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:45:41.158607 kubelet[2257]: E0414 00:45:41.156906 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:41.161347 kubelet[2257]: E0414 00:45:41.161292 2257 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:45:41.161621 kubelet[2257]: E0414 00:45:41.161550 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:42.251967 kubelet[2257]: E0414 00:45:42.251748 2257 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:45:42.251967 kubelet[2257]: E0414 00:45:42.251766 2257 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:45:42.251967 kubelet[2257]: E0414 00:45:42.251897 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:42.251967 kubelet[2257]: E0414 00:45:42.251968 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:44.725806 kubelet[2257]: E0414 00:45:44.724183 2257 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:45:44.731371 kubelet[2257]: E0414 00:45:44.731053 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:46.592952 kubelet[2257]: E0414 00:45:46.592903 2257 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:45:46.594234 kubelet[2257]: E0414 00:45:46.593894 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:47.021706 kubelet[2257]: E0414 00:45:47.017390 2257 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 14 00:45:47.138688 kubelet[2257]: I0414 00:45:47.136956 2257 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 00:45:47.150043 kubelet[2257]: I0414 00:45:47.146557 2257 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 14 00:45:47.290076 kubelet[2257]: E0414 00:45:47.287146 2257 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a61295b8ebeb7d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:45:37.306135421 +0000 UTC m=+3.387885302,LastTimestamp:2026-04-14 00:45:37.306135421 +0000 UTC m=+3.387885302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 00:45:47.336431 kubelet[2257]: I0414 00:45:47.335018 2257 apiserver.go:52] "Watching apiserver" Apr 14 00:45:47.377784 kubelet[2257]: E0414 00:45:47.377389 2257 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 14 00:45:47.382317 kubelet[2257]: I0414 00:45:47.379486 2257 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 00:45:47.438939 kubelet[2257]: E0414 00:45:47.433281 2257 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 14 00:45:47.438939 kubelet[2257]: I0414 00:45:47.433333 2257 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 00:45:47.442942 kubelet[2257]: I0414 00:45:47.440078 2257 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 14 00:45:47.501837 kubelet[2257]: E0414 00:45:47.499226 2257 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 14 00:45:49.830920 kubelet[2257]: I0414 00:45:49.830876 2257 
kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 00:45:49.887658 kubelet[2257]: E0414 00:45:49.887166 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:50.678019 kubelet[2257]: E0414 00:45:50.677985 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:54.829686 kubelet[2257]: I0414 00:45:54.829346 2257 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 00:45:54.882760 kubelet[2257]: E0414 00:45:54.881886 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:55.963998 kubelet[2257]: E0414 00:45:55.958493 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:56.638798 kubelet[2257]: I0414 00:45:56.638357 2257 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 00:45:56.756668 kubelet[2257]: E0414 00:45:56.756006 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:56.759285 kubelet[2257]: I0414 00:45:56.758389 2257 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.758370583 podStartE2EDuration="7.758370583s" podCreationTimestamp="2026-04-14 00:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:45:55.034135685 +0000 UTC m=+21.115885572" watchObservedRunningTime="2026-04-14 00:45:56.758370583 +0000 UTC m=+22.840120467" Apr 14 00:45:56.997361 kubelet[2257]: E0414 00:45:56.994966 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:57.079624 kubelet[2257]: I0414 00:45:57.075686 2257 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.075294504 podStartE2EDuration="3.075294504s" podCreationTimestamp="2026-04-14 00:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:45:56.923547811 +0000 UTC m=+23.005297727" watchObservedRunningTime="2026-04-14 00:45:57.075294504 +0000 UTC m=+23.157044403" Apr 14 00:45:57.912317 systemd[1]: Reloading requested from client PID 2554 ('systemctl') (unit session-5.scope)... Apr 14 00:45:57.912518 systemd[1]: Reloading... Apr 14 00:45:58.199772 kubelet[2257]: E0414 00:45:58.145951 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:58.225667 zram_generator::config[2598]: No configuration found. 
Apr 14 00:45:58.347150 kubelet[2257]: I0414 00:45:58.345533 2257 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.345418615 podStartE2EDuration="2.345418615s" podCreationTimestamp="2026-04-14 00:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:45:57.08937821 +0000 UTC m=+23.171128108" watchObservedRunningTime="2026-04-14 00:45:58.345418615 +0000 UTC m=+24.427168508" Apr 14 00:45:58.586822 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 00:45:58.724961 systemd[1]: Reloading finished in 811 ms. Apr 14 00:45:58.875097 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:45:58.891041 systemd[1]: kubelet.service: Deactivated successfully. Apr 14 00:45:58.891308 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:45:58.891355 systemd[1]: kubelet.service: Consumed 14.716s CPU time, 136.0M memory peak, 0B memory swap peak. Apr 14 00:45:58.903144 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:45:59.302765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:45:59.334465 (kubelet)[2640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 14 00:45:59.656999 kubelet[2640]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 14 00:45:59.656999 kubelet[2640]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 14 00:45:59.656999 kubelet[2640]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 00:45:59.656999 kubelet[2640]: I0414 00:45:59.655254 2640 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 14 00:45:59.679840 kubelet[2640]: I0414 00:45:59.679785 2640 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 14 00:45:59.679840 kubelet[2640]: I0414 00:45:59.679806 2640 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 14 00:45:59.689614 kubelet[2640]: I0414 00:45:59.689482 2640 server.go:956] "Client rotation is on, will bootstrap in background" Apr 14 00:45:59.857035 kubelet[2640]: I0414 00:45:59.854287 2640 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 14 00:45:59.894261 kubelet[2640]: I0414 00:45:59.894056 2640 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 14 00:46:00.070002 kubelet[2640]: E0414 00:46:00.067156 2640 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 14 00:46:00.070002 kubelet[2640]: I0414 00:46:00.067187 2640 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Apr 14 00:46:00.107822 kubelet[2640]: I0414 00:46:00.106485 2640 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 14 00:46:00.114987 kubelet[2640]: I0414 00:46:00.111296 2640 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 14 00:46:00.120467 kubelet[2640]: I0414 00:46:00.111493 2640 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 14 
00:46:00.122061 kubelet[2640]: I0414 00:46:00.121422 2640 topology_manager.go:138] "Creating topology manager with none policy" Apr 14 00:46:00.122061 kubelet[2640]: I0414 00:46:00.121526 2640 container_manager_linux.go:303] "Creating device plugin manager" Apr 14 00:46:00.127882 kubelet[2640]: I0414 00:46:00.123654 2640 state_mem.go:36] "Initialized new in-memory state store" Apr 14 00:46:00.130628 kubelet[2640]: I0414 00:46:00.129564 2640 kubelet.go:480] "Attempting to sync node with API server" Apr 14 00:46:00.130628 kubelet[2640]: I0414 00:46:00.130314 2640 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 14 00:46:00.130628 kubelet[2640]: I0414 00:46:00.130561 2640 kubelet.go:386] "Adding apiserver pod source" Apr 14 00:46:00.130628 kubelet[2640]: I0414 00:46:00.130634 2640 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 14 00:46:00.156232 kubelet[2640]: I0414 00:46:00.151536 2640 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 14 00:46:00.156232 kubelet[2640]: I0414 00:46:00.152398 2640 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 14 00:46:00.175259 kubelet[2640]: I0414 00:46:00.174831 2640 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 14 00:46:00.175259 kubelet[2640]: I0414 00:46:00.174889 2640 server.go:1289] "Started kubelet" Apr 14 00:46:00.176191 kubelet[2640]: I0414 00:46:00.175200 2640 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 14 00:46:00.179231 kubelet[2640]: I0414 00:46:00.179099 2640 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 14 00:46:00.180209 kubelet[2640]: I0414 00:46:00.179481 2640 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" 
Apr 14 00:46:00.235071 kubelet[2640]: I0414 00:46:00.235040 2640 server.go:317] "Adding debug handlers to kubelet server"
Apr 14 00:46:00.325809 kubelet[2640]: I0414 00:46:00.322505 2640 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 14 00:46:00.355808 kubelet[2640]: I0414 00:46:00.353501 2640 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 14 00:46:00.368969 kubelet[2640]: I0414 00:46:00.365907 2640 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 14 00:46:00.378084 kubelet[2640]: I0414 00:46:00.376640 2640 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 14 00:46:00.378084 kubelet[2640]: I0414 00:46:00.376979 2640 reconciler.go:26] "Reconciler: start to sync state"
Apr 14 00:46:00.445126 kubelet[2640]: I0414 00:46:00.441035 2640 factory.go:223] Registration of the systemd container factory successfully
Apr 14 00:46:00.496280 kubelet[2640]: E0414 00:46:00.495940 2640 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 14 00:46:00.509498 kubelet[2640]: I0414 00:46:00.507923 2640 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 14 00:46:00.609748 kubelet[2640]: I0414 00:46:00.609392 2640 factory.go:223] Registration of the containerd container factory successfully
Apr 14 00:46:00.866593 kubelet[2640]: I0414 00:46:00.866141 2640 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 14 00:46:00.887856 kubelet[2640]: I0414 00:46:00.887809 2640 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 14 00:46:00.887856 kubelet[2640]: I0414 00:46:00.887843 2640 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 14 00:46:00.888066 kubelet[2640]: I0414 00:46:00.887932 2640 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 14 00:46:00.888066 kubelet[2640]: I0414 00:46:00.887938 2640 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 14 00:46:00.888066 kubelet[2640]: E0414 00:46:00.887976 2640 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 14 00:46:00.992817 kubelet[2640]: E0414 00:46:00.992556 2640 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 00:46:01.153340 kubelet[2640]: I0414 00:46:01.146134 2640 apiserver.go:52] "Watching apiserver"
Apr 14 00:46:01.195259 kubelet[2640]: E0414 00:46:01.195202 2640 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 00:46:01.597851 kubelet[2640]: E0414 00:46:01.597096 2640 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 00:46:01.641849 kubelet[2640]: I0414 00:46:01.641208 2640 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 14 00:46:01.641849 kubelet[2640]: I0414 00:46:01.641224 2640 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 14 00:46:01.641849 kubelet[2640]: I0414 00:46:01.641250 2640 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 00:46:01.641849 kubelet[2640]: I0414 00:46:01.641431 2640 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 14 00:46:01.641849 kubelet[2640]: I0414 00:46:01.641443 2640 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 14 00:46:01.641849 kubelet[2640]: I0414 00:46:01.641522 2640 policy_none.go:49] "None policy: Start"
Apr 14 00:46:01.641849 kubelet[2640]: I0414 00:46:01.641532 2640 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 14 00:46:01.645771 kubelet[2640]: I0414 00:46:01.644103 2640 state_mem.go:35] "Initializing new in-memory state store"
Apr 14 00:46:01.646432 kubelet[2640]: I0414 00:46:01.646416 2640 state_mem.go:75] "Updated machine memory state"
Apr 14 00:46:01.737108 kubelet[2640]: E0414 00:46:01.736923 2640 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 14 00:46:01.743027 kubelet[2640]: I0414 00:46:01.742486 2640 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 14 00:46:01.798528 kubelet[2640]: I0414 00:46:01.744521 2640 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 14 00:46:01.838010 kubelet[2640]: I0414 00:46:01.837747 2640 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 14 00:46:02.027157 kubelet[2640]: E0414 00:46:02.024059 2640 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 14 00:46:02.195363 kubelet[2640]: I0414 00:46:02.195038 2640 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 00:46:02.298930 kubelet[2640]: I0414 00:46:02.298700 2640 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Apr 14 00:46:02.298930 kubelet[2640]: I0414 00:46:02.298791 2640 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 14 00:46:02.298930 kubelet[2640]: I0414 00:46:02.298813 2640 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 14 00:46:02.320806 containerd[1465]: time="2026-04-14T00:46:02.317455206Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 14 00:46:02.348538 kubelet[2640]: I0414 00:46:02.348190 2640 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 14 00:46:02.447517 kubelet[2640]: I0414 00:46:02.444363 2640 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 14 00:46:02.524503 kubelet[2640]: I0414 00:46:02.524014 2640 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 14 00:46:02.529853 kubelet[2640]: I0414 00:46:02.527129 2640 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 14 00:46:02.565842 kubelet[2640]: I0414 00:46:02.564999 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c33ff94a876322bf38595d0014c62f9d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c33ff94a876322bf38595d0014c62f9d\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 00:46:02.566143 kubelet[2640]: I0414 00:46:02.565990 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c33ff94a876322bf38595d0014c62f9d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c33ff94a876322bf38595d0014c62f9d\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 00:46:02.566143 kubelet[2640]: I0414 00:46:02.566128 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c33ff94a876322bf38595d0014c62f9d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c33ff94a876322bf38595d0014c62f9d\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 00:46:02.566216 kubelet[2640]: I0414 00:46:02.566148 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:46:02.566216 kubelet[2640]: I0414 00:46:02.566163 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:46:02.570903 kubelet[2640]: I0414 00:46:02.568345 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:46:02.570903 kubelet[2640]: I0414 00:46:02.568473 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:46:02.571289 kubelet[2640]: I0414 00:46:02.571129 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:46:02.571289 kubelet[2640]: I0414 00:46:02.571210 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4faa51a8-6b2a-45e5-9fba-618fa3de53fe-kube-proxy\") pod \"kube-proxy-8jc4w\" (UID: \"4faa51a8-6b2a-45e5-9fba-618fa3de53fe\") " pod="kube-system/kube-proxy-8jc4w"
Apr 14 00:46:02.571289 kubelet[2640]: I0414 00:46:02.571226 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4faa51a8-6b2a-45e5-9fba-618fa3de53fe-xtables-lock\") pod \"kube-proxy-8jc4w\" (UID: \"4faa51a8-6b2a-45e5-9fba-618fa3de53fe\") " pod="kube-system/kube-proxy-8jc4w"
Apr 14 00:46:02.571289 kubelet[2640]: I0414 00:46:02.571260 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4faa51a8-6b2a-45e5-9fba-618fa3de53fe-lib-modules\") pod \"kube-proxy-8jc4w\" (UID: \"4faa51a8-6b2a-45e5-9fba-618fa3de53fe\") " pod="kube-system/kube-proxy-8jc4w"
Apr 14 00:46:02.571289 kubelet[2640]: I0414 00:46:02.571273 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx69p\" (UniqueName: \"kubernetes.io/projected/4faa51a8-6b2a-45e5-9fba-618fa3de53fe-kube-api-access-qx69p\") pod \"kube-proxy-8jc4w\" (UID: \"4faa51a8-6b2a-45e5-9fba-618fa3de53fe\") " pod="kube-system/kube-proxy-8jc4w"
Apr 14 00:46:02.571484 kubelet[2640]: I0414 00:46:02.571362 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost"
Apr 14 00:46:02.589304 kubelet[2640]: E0414 00:46:02.589209 2640 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 14 00:46:02.731887 kubelet[2640]: E0414 00:46:02.728808 2640 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 14 00:46:02.773516 systemd[1]: Created slice kubepods-besteffort-pod4faa51a8_6b2a_45e5_9fba_618fa3de53fe.slice - libcontainer container kubepods-besteffort-pod4faa51a8_6b2a_45e5_9fba_618fa3de53fe.slice.
Apr 14 00:46:02.886237 kubelet[2640]: E0414 00:46:02.885447 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:02.915331 kubelet[2640]: E0414 00:46:02.915046 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:02.924404 kubelet[2640]: E0414 00:46:02.922444 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:02.978279 containerd[1465]: time="2026-04-14T00:46:02.977532936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8jc4w,Uid:4faa51a8-6b2a-45e5-9fba-618fa3de53fe,Namespace:kube-system,Attempt:0,}"
Apr 14 00:46:03.123866 kubelet[2640]: E0414 00:46:03.118427 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:03.267145 containerd[1465]: time="2026-04-14T00:46:03.265541991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 00:46:03.267145 containerd[1465]: time="2026-04-14T00:46:03.265669717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 00:46:03.267145 containerd[1465]: time="2026-04-14T00:46:03.265678493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:46:03.267145 containerd[1465]: time="2026-04-14T00:46:03.266265487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:46:03.479040 kubelet[2640]: E0414 00:46:03.472006 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:03.482811 kubelet[2640]: E0414 00:46:03.480792 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:03.488059 systemd[1]: Started cri-containerd-01e3f4ebeb842893cb16465091339b18836c1531012f1c447e16305d6492e03b.scope - libcontainer container 01e3f4ebeb842893cb16465091339b18836c1531012f1c447e16305d6492e03b.
Apr 14 00:46:03.919830 containerd[1465]: time="2026-04-14T00:46:03.919526681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8jc4w,Uid:4faa51a8-6b2a-45e5-9fba-618fa3de53fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"01e3f4ebeb842893cb16465091339b18836c1531012f1c447e16305d6492e03b\""
Apr 14 00:46:03.970033 kubelet[2640]: E0414 00:46:03.969522 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:04.107043 containerd[1465]: time="2026-04-14T00:46:04.103416521Z" level=info msg="CreateContainer within sandbox \"01e3f4ebeb842893cb16465091339b18836c1531012f1c447e16305d6492e03b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 14 00:46:04.242209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount141155696.mount: Deactivated successfully.
Apr 14 00:46:04.258423 kubelet[2640]: I0414 00:46:04.253967 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/de8ba3fb-04d1-4bc4-aae0-37e2928afdfb-cni-plugin\") pod \"kube-flannel-ds-4mj82\" (UID: \"de8ba3fb-04d1-4bc4-aae0-37e2928afdfb\") " pod="kube-flannel/kube-flannel-ds-4mj82"
Apr 14 00:46:04.258423 kubelet[2640]: I0414 00:46:04.254004 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de8ba3fb-04d1-4bc4-aae0-37e2928afdfb-xtables-lock\") pod \"kube-flannel-ds-4mj82\" (UID: \"de8ba3fb-04d1-4bc4-aae0-37e2928afdfb\") " pod="kube-flannel/kube-flannel-ds-4mj82"
Apr 14 00:46:04.258423 kubelet[2640]: I0414 00:46:04.254025 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxkzd\" (UniqueName: \"kubernetes.io/projected/de8ba3fb-04d1-4bc4-aae0-37e2928afdfb-kube-api-access-dxkzd\") pod \"kube-flannel-ds-4mj82\" (UID: \"de8ba3fb-04d1-4bc4-aae0-37e2928afdfb\") " pod="kube-flannel/kube-flannel-ds-4mj82"
Apr 14 00:46:04.258423 kubelet[2640]: I0414 00:46:04.254130 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/de8ba3fb-04d1-4bc4-aae0-37e2928afdfb-run\") pod \"kube-flannel-ds-4mj82\" (UID: \"de8ba3fb-04d1-4bc4-aae0-37e2928afdfb\") " pod="kube-flannel/kube-flannel-ds-4mj82"
Apr 14 00:46:04.258423 kubelet[2640]: I0414 00:46:04.254151 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/de8ba3fb-04d1-4bc4-aae0-37e2928afdfb-cni\") pod \"kube-flannel-ds-4mj82\" (UID: \"de8ba3fb-04d1-4bc4-aae0-37e2928afdfb\") " pod="kube-flannel/kube-flannel-ds-4mj82"
Apr 14 00:46:04.260095 kubelet[2640]: I0414 00:46:04.254171 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/de8ba3fb-04d1-4bc4-aae0-37e2928afdfb-flannel-cfg\") pod \"kube-flannel-ds-4mj82\" (UID: \"de8ba3fb-04d1-4bc4-aae0-37e2928afdfb\") " pod="kube-flannel/kube-flannel-ds-4mj82"
Apr 14 00:46:04.268644 containerd[1465]: time="2026-04-14T00:46:04.267251404Z" level=info msg="CreateContainer within sandbox \"01e3f4ebeb842893cb16465091339b18836c1531012f1c447e16305d6492e03b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"81b7f31d2e08785db1460a4277410c0685036c52eacc5fde55c925b23d321bc5\""
Apr 14 00:46:04.290362 containerd[1465]: time="2026-04-14T00:46:04.290267747Z" level=info msg="StartContainer for \"81b7f31d2e08785db1460a4277410c0685036c52eacc5fde55c925b23d321bc5\""
Apr 14 00:46:04.474052 systemd[1]: Created slice kubepods-burstable-podde8ba3fb_04d1_4bc4_aae0_37e2928afdfb.slice - libcontainer container kubepods-burstable-podde8ba3fb_04d1_4bc4_aae0_37e2928afdfb.slice.
Apr 14 00:46:04.576462 systemd[1]: Started cri-containerd-81b7f31d2e08785db1460a4277410c0685036c52eacc5fde55c925b23d321bc5.scope - libcontainer container 81b7f31d2e08785db1460a4277410c0685036c52eacc5fde55c925b23d321bc5.
Apr 14 00:46:04.595655 kubelet[2640]: E0414 00:46:04.577843 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:04.595655 kubelet[2640]: E0414 00:46:04.578008 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:04.896819 kubelet[2640]: E0414 00:46:04.896144 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:04.917889 containerd[1465]: time="2026-04-14T00:46:04.917542168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-4mj82,Uid:de8ba3fb-04d1-4bc4-aae0-37e2928afdfb,Namespace:kube-flannel,Attempt:0,}"
Apr 14 00:46:05.159172 containerd[1465]: time="2026-04-14T00:46:05.156642356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 00:46:05.159172 containerd[1465]: time="2026-04-14T00:46:05.156720435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 00:46:05.159172 containerd[1465]: time="2026-04-14T00:46:05.156732681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:46:05.159172 containerd[1465]: time="2026-04-14T00:46:05.156819852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:46:05.162528 containerd[1465]: time="2026-04-14T00:46:05.162312909Z" level=info msg="StartContainer for \"81b7f31d2e08785db1460a4277410c0685036c52eacc5fde55c925b23d321bc5\" returns successfully"
Apr 14 00:46:05.332685 systemd[1]: run-containerd-runc-k8s.io-b82dee947e438e76de7bb0cd88a27071b25c13283594993019b98d772d3f3121-runc.MeTaK3.mount: Deactivated successfully.
Apr 14 00:46:05.399061 systemd[1]: Started cri-containerd-b82dee947e438e76de7bb0cd88a27071b25c13283594993019b98d772d3f3121.scope - libcontainer container b82dee947e438e76de7bb0cd88a27071b25c13283594993019b98d772d3f3121.
Apr 14 00:46:05.683141 sudo[1602]: pam_unix(sudo:session): session closed for user root
Apr 14 00:46:05.703210 sshd[1599]: pam_unix(sshd:session): session closed for user core
Apr 14 00:46:05.736214 systemd[1]: sshd@4-10.0.0.23:22-10.0.0.1:44018.service: Deactivated successfully.
Apr 14 00:46:05.799898 systemd[1]: session-5.scope: Deactivated successfully.
Apr 14 00:46:05.801865 systemd[1]: session-5.scope: Consumed 42.230s CPU time, 166.0M memory peak, 0B memory swap peak.
Apr 14 00:46:05.806729 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit.
Apr 14 00:46:05.813668 systemd-logind[1448]: Removed session 5.
Apr 14 00:46:05.895304 kubelet[2640]: E0414 00:46:05.892329 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:06.091461 containerd[1465]: time="2026-04-14T00:46:06.090559116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-4mj82,Uid:de8ba3fb-04d1-4bc4-aae0-37e2928afdfb,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"b82dee947e438e76de7bb0cd88a27071b25c13283594993019b98d772d3f3121\""
Apr 14 00:46:06.108488 kubelet[2640]: E0414 00:46:06.108115 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:06.148151 containerd[1465]: time="2026-04-14T00:46:06.144821359Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\""
Apr 14 00:46:07.009110 kubelet[2640]: E0414 00:46:07.005887 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:08.521959 kubelet[2640]: E0414 00:46:08.521870 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:08.692499 kubelet[2640]: I0414 00:46:08.692420 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8jc4w" podStartSLOduration=7.692398158 podStartE2EDuration="7.692398158s" podCreationTimestamp="2026-04-14 00:46:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:46:06.094487141 +0000 UTC m=+6.747201266" watchObservedRunningTime="2026-04-14 00:46:08.692398158 +0000 UTC m=+9.345112268"
Apr 14 00:46:08.898195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2265868354.mount: Deactivated successfully.
Apr 14 00:46:09.054302 containerd[1465]: time="2026-04-14T00:46:09.053979509Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 00:46:09.056217 containerd[1465]: time="2026-04-14T00:46:09.056157655Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008"
Apr 14 00:46:09.065251 containerd[1465]: time="2026-04-14T00:46:09.064474366Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 00:46:09.093027 containerd[1465]: time="2026-04-14T00:46:09.092187779Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 00:46:09.104420 containerd[1465]: time="2026-04-14T00:46:09.104179180Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 2.954823581s"
Apr 14 00:46:09.106874 containerd[1465]: time="2026-04-14T00:46:09.104979324Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\""
Apr 14 00:46:09.228141 containerd[1465]: time="2026-04-14T00:46:09.227225864Z" level=info msg="CreateContainer within sandbox \"b82dee947e438e76de7bb0cd88a27071b25c13283594993019b98d772d3f3121\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Apr 14 00:46:09.294152 containerd[1465]: time="2026-04-14T00:46:09.294058834Z" level=info msg="CreateContainer within sandbox \"b82dee947e438e76de7bb0cd88a27071b25c13283594993019b98d772d3f3121\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"ba9773cf1d77c959d3a943a8ae9dd1e446896dac89ff09447525c8616cdca986\""
Apr 14 00:46:09.305735 containerd[1465]: time="2026-04-14T00:46:09.305184292Z" level=info msg="StartContainer for \"ba9773cf1d77c959d3a943a8ae9dd1e446896dac89ff09447525c8616cdca986\""
Apr 14 00:46:09.475547 systemd[1]: Started cri-containerd-ba9773cf1d77c959d3a943a8ae9dd1e446896dac89ff09447525c8616cdca986.scope - libcontainer container ba9773cf1d77c959d3a943a8ae9dd1e446896dac89ff09447525c8616cdca986.
Apr 14 00:46:09.584905 systemd[1]: cri-containerd-ba9773cf1d77c959d3a943a8ae9dd1e446896dac89ff09447525c8616cdca986.scope: Deactivated successfully.
Apr 14 00:46:09.588070 containerd[1465]: time="2026-04-14T00:46:09.588011687Z" level=info msg="StartContainer for \"ba9773cf1d77c959d3a943a8ae9dd1e446896dac89ff09447525c8616cdca986\" returns successfully"
Apr 14 00:46:09.791717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba9773cf1d77c959d3a943a8ae9dd1e446896dac89ff09447525c8616cdca986-rootfs.mount: Deactivated successfully.
Apr 14 00:46:09.833692 containerd[1465]: time="2026-04-14T00:46:09.832712362Z" level=info msg="shim disconnected" id=ba9773cf1d77c959d3a943a8ae9dd1e446896dac89ff09447525c8616cdca986 namespace=k8s.io
Apr 14 00:46:09.836779 containerd[1465]: time="2026-04-14T00:46:09.835096134Z" level=warning msg="cleaning up after shim disconnected" id=ba9773cf1d77c959d3a943a8ae9dd1e446896dac89ff09447525c8616cdca986 namespace=k8s.io
Apr 14 00:46:09.836779 containerd[1465]: time="2026-04-14T00:46:09.836187273Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 00:46:10.133483 kubelet[2640]: E0414 00:46:10.131041 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:10.193674 containerd[1465]: time="2026-04-14T00:46:10.191715017Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\""
Apr 14 00:46:10.604250 kubelet[2640]: E0414 00:46:10.603818 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:11.123252 kubelet[2640]: E0414 00:46:11.123178 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:12.171264 kubelet[2640]: E0414 00:46:12.170404 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:13.151408 kubelet[2640]: E0414 00:46:13.151340 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:16.793431 containerd[1465]: time="2026-04-14T00:46:16.793273114Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 00:46:16.795154 containerd[1465]: time="2026-04-14T00:46:16.795065031Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574"
Apr 14 00:46:16.809867 containerd[1465]: time="2026-04-14T00:46:16.809439771Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 00:46:16.941993 containerd[1465]: time="2026-04-14T00:46:16.938231627Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 00:46:16.948112 containerd[1465]: time="2026-04-14T00:46:16.948041658Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 6.756267719s"
Apr 14 00:46:16.948112 containerd[1465]: time="2026-04-14T00:46:16.948087244Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\""
Apr 14 00:46:17.034783 containerd[1465]: time="2026-04-14T00:46:17.034674126Z" level=info msg="CreateContainer within sandbox \"b82dee947e438e76de7bb0cd88a27071b25c13283594993019b98d772d3f3121\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 14 00:46:17.179054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2058981164.mount: Deactivated successfully.
Apr 14 00:46:17.195495 containerd[1465]: time="2026-04-14T00:46:17.195312131Z" level=info msg="CreateContainer within sandbox \"b82dee947e438e76de7bb0cd88a27071b25c13283594993019b98d772d3f3121\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"44157243658953b709e544e87681fb0e6757f8e4604e1d9a11f49950d3f24fd6\""
Apr 14 00:46:17.215678 containerd[1465]: time="2026-04-14T00:46:17.215369958Z" level=info msg="StartContainer for \"44157243658953b709e544e87681fb0e6757f8e4604e1d9a11f49950d3f24fd6\""
Apr 14 00:46:17.493877 systemd[1]: Started cri-containerd-44157243658953b709e544e87681fb0e6757f8e4604e1d9a11f49950d3f24fd6.scope - libcontainer container 44157243658953b709e544e87681fb0e6757f8e4604e1d9a11f49950d3f24fd6.
Apr 14 00:46:18.039192 systemd[1]: cri-containerd-44157243658953b709e544e87681fb0e6757f8e4604e1d9a11f49950d3f24fd6.scope: Deactivated successfully.
Apr 14 00:46:18.060798 containerd[1465]: time="2026-04-14T00:46:18.059526774Z" level=info msg="StartContainer for \"44157243658953b709e544e87681fb0e6757f8e4604e1d9a11f49950d3f24fd6\" returns successfully"
Apr 14 00:46:18.221769 kubelet[2640]: I0414 00:46:18.218549 2640 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 14 00:46:18.317970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44157243658953b709e544e87681fb0e6757f8e4604e1d9a11f49950d3f24fd6-rootfs.mount: Deactivated successfully.
Apr 14 00:46:18.445376 containerd[1465]: time="2026-04-14T00:46:18.445021717Z" level=info msg="shim disconnected" id=44157243658953b709e544e87681fb0e6757f8e4604e1d9a11f49950d3f24fd6 namespace=k8s.io
Apr 14 00:46:18.479953 containerd[1465]: time="2026-04-14T00:46:18.446537187Z" level=warning msg="cleaning up after shim disconnected" id=44157243658953b709e544e87681fb0e6757f8e4604e1d9a11f49950d3f24fd6 namespace=k8s.io
Apr 14 00:46:18.480369 containerd[1465]: time="2026-04-14T00:46:18.480217745Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 00:46:18.614696 kubelet[2640]: E0414 00:46:18.613114 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:18.645562 kubelet[2640]: I0414 00:46:18.644324 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9174e9b1-d631-4547-99c1-af41c3ab92b4-config-volume\") pod \"coredns-674b8bbfcf-4rdbz\" (UID: \"9174e9b1-d631-4547-99c1-af41c3ab92b4\") " pod="kube-system/coredns-674b8bbfcf-4rdbz"
Apr 14 00:46:18.759532 kubelet[2640]: I0414 00:46:18.757818 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbqwh\" (UniqueName: \"kubernetes.io/projected/9174e9b1-d631-4547-99c1-af41c3ab92b4-kube-api-access-xbqwh\") pod \"coredns-674b8bbfcf-4rdbz\" (UID: \"9174e9b1-d631-4547-99c1-af41c3ab92b4\") " pod="kube-system/coredns-674b8bbfcf-4rdbz"
Apr 14 00:46:18.759532 kubelet[2640]: E0414 00:46:18.759282 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:18.793875 systemd[1]: Created slice kubepods-burstable-podf36e787b_5273_4efd_9fe5_582c58f1547b.slice - libcontainer container kubepods-burstable-podf36e787b_5273_4efd_9fe5_582c58f1547b.slice.
Apr 14 00:46:18.825467 systemd[1]: Created slice kubepods-burstable-pod9174e9b1_d631_4547_99c1_af41c3ab92b4.slice - libcontainer container kubepods-burstable-pod9174e9b1_d631_4547_99c1_af41c3ab92b4.slice.
Apr 14 00:46:18.872963 kubelet[2640]: I0414 00:46:18.864893 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54bg5\" (UniqueName: \"kubernetes.io/projected/f36e787b-5273-4efd-9fe5-582c58f1547b-kube-api-access-54bg5\") pod \"coredns-674b8bbfcf-ntbgs\" (UID: \"f36e787b-5273-4efd-9fe5-582c58f1547b\") " pod="kube-system/coredns-674b8bbfcf-ntbgs"
Apr 14 00:46:18.923832 kubelet[2640]: I0414 00:46:18.923635 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f36e787b-5273-4efd-9fe5-582c58f1547b-config-volume\") pod \"coredns-674b8bbfcf-ntbgs\" (UID: \"f36e787b-5273-4efd-9fe5-582c58f1547b\") " pod="kube-system/coredns-674b8bbfcf-ntbgs"
Apr 14 00:46:19.244173 kubelet[2640]: E0414 00:46:19.240058 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:19.301527 containerd[1465]: time="2026-04-14T00:46:19.301354986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4rdbz,Uid:9174e9b1-d631-4547-99c1-af41c3ab92b4,Namespace:kube-system,Attempt:0,}"
Apr 14 00:46:19.497884 kubelet[2640]: E0414 00:46:19.497492 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:19.516549 containerd[1465]: time="2026-04-14T00:46:19.514743758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ntbgs,Uid:f36e787b-5273-4efd-9fe5-582c58f1547b,Namespace:kube-system,Attempt:0,}"
Apr 14 00:46:19.735112 systemd[1]: run-netns-cni\x2dd5df6d2c\x2d0266\x2d709c\x2ddea0\x2ddbe6d7d2cd24.mount: Deactivated successfully.
Apr 14 00:46:19.736552 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-12cb11d08dcac38752c51562a953826724af61abff8287d914d6f1df27af9e45-shm.mount: Deactivated successfully.
Apr 14 00:46:19.801060 containerd[1465]: time="2026-04-14T00:46:19.796980663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4rdbz,Uid:9174e9b1-d631-4547-99c1-af41c3ab92b4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"12cb11d08dcac38752c51562a953826724af61abff8287d914d6f1df27af9e45\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Apr 14 00:46:19.807331 kubelet[2640]: E0414 00:46:19.806777 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12cb11d08dcac38752c51562a953826724af61abff8287d914d6f1df27af9e45\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Apr 14 00:46:19.809046 kubelet[2640]: E0414 00:46:19.808948 2640 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12cb11d08dcac38752c51562a953826724af61abff8287d914d6f1df27af9e45\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-4rdbz"
Apr 14 00:46:19.814653 kubelet[2640]: E0414 00:46:19.811668 2640 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12cb11d08dcac38752c51562a953826724af61abff8287d914d6f1df27af9e45\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-4rdbz"
Apr 14 00:46:19.819000 kubelet[2640]: E0414 00:46:19.818917 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4rdbz_kube-system(9174e9b1-d631-4547-99c1-af41c3ab92b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4rdbz_kube-system(9174e9b1-d631-4547-99c1-af41c3ab92b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12cb11d08dcac38752c51562a953826724af61abff8287d914d6f1df27af9e45\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-4rdbz" podUID="9174e9b1-d631-4547-99c1-af41c3ab92b4"
Apr 14 00:46:19.820918 kubelet[2640]: E0414 00:46:19.819320 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:20.027751 containerd[1465]: time="2026-04-14T00:46:20.027035565Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ntbgs,Uid:f36e787b-5273-4efd-9fe5-582c58f1547b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7fde70426d008d3abaa69bb0e7f383725bacaff15269a853249f4ace3435c49f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Apr 14 00:46:20.043005 kubelet[2640]: E0414 00:46:20.038307 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fde70426d008d3abaa69bb0e7f383725bacaff15269a853249f4ace3435c49f\": plugin type=\"flannel\"
failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 14 00:46:20.055944 containerd[1465]: time="2026-04-14T00:46:20.046069279Z" level=info msg="CreateContainer within sandbox \"b82dee947e438e76de7bb0cd88a27071b25c13283594993019b98d772d3f3121\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Apr 14 00:46:20.072123 kubelet[2640]: E0414 00:46:20.044136 2640 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fde70426d008d3abaa69bb0e7f383725bacaff15269a853249f4ace3435c49f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-ntbgs" Apr 14 00:46:20.072123 kubelet[2640]: E0414 00:46:20.068795 2640 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fde70426d008d3abaa69bb0e7f383725bacaff15269a853249f4ace3435c49f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-ntbgs" Apr 14 00:46:20.087640 kubelet[2640]: E0414 00:46:20.087080 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ntbgs_kube-system(f36e787b-5273-4efd-9fe5-582c58f1547b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ntbgs_kube-system(f36e787b-5273-4efd-9fe5-582c58f1547b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fde70426d008d3abaa69bb0e7f383725bacaff15269a853249f4ace3435c49f\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-ntbgs" podUID="f36e787b-5273-4efd-9fe5-582c58f1547b" Apr 14 
00:46:20.346930 containerd[1465]: time="2026-04-14T00:46:20.346466421Z" level=info msg="CreateContainer within sandbox \"b82dee947e438e76de7bb0cd88a27071b25c13283594993019b98d772d3f3121\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"ba8a8aeb3f83127b85415814cb6848ec3896312b92c84285e591ec5bb76e90f6\"" Apr 14 00:46:20.357671 containerd[1465]: time="2026-04-14T00:46:20.356274594Z" level=info msg="StartContainer for \"ba8a8aeb3f83127b85415814cb6848ec3896312b92c84285e591ec5bb76e90f6\"" Apr 14 00:46:20.395498 systemd[1]: run-netns-cni\x2da6067d02\x2d374e\x2d4c2f\x2d17fe\x2d5d994acb6cc8.mount: Deactivated successfully. Apr 14 00:46:20.396417 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7fde70426d008d3abaa69bb0e7f383725bacaff15269a853249f4ace3435c49f-shm.mount: Deactivated successfully. Apr 14 00:46:21.000293 systemd[1]: Started cri-containerd-ba8a8aeb3f83127b85415814cb6848ec3896312b92c84285e591ec5bb76e90f6.scope - libcontainer container ba8a8aeb3f83127b85415814cb6848ec3896312b92c84285e591ec5bb76e90f6. 
Apr 14 00:46:21.362342 containerd[1465]: time="2026-04-14T00:46:21.362234209Z" level=info msg="StartContainer for \"ba8a8aeb3f83127b85415814cb6848ec3896312b92c84285e591ec5bb76e90f6\" returns successfully"
Apr 14 00:46:22.351062 kubelet[2640]: E0414 00:46:22.346739 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:22.612987 kubelet[2640]: I0414 00:46:22.612316 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-4mj82" podStartSLOduration=8.760073833 podStartE2EDuration="19.612227552s" podCreationTimestamp="2026-04-14 00:46:03 +0000 UTC" firstStartedPulling="2026-04-14 00:46:06.124399562 +0000 UTC m=+6.777113676" lastFinishedPulling="2026-04-14 00:46:16.976553276 +0000 UTC m=+17.629267395" observedRunningTime="2026-04-14 00:46:22.609216744 +0000 UTC m=+23.261930861" watchObservedRunningTime="2026-04-14 00:46:22.612227552 +0000 UTC m=+23.264941665"
Apr 14 00:46:23.235545 systemd-networkd[1389]: flannel.1: Link UP
Apr 14 00:46:23.235563 systemd-networkd[1389]: flannel.1: Gained carrier
Apr 14 00:46:23.447011 kubelet[2640]: E0414 00:46:23.446932 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:24.742523 systemd-networkd[1389]: flannel.1: Gained IPv6LL
Apr 14 00:46:31.910236 kubelet[2640]: E0414 00:46:31.910084 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:31.921392 containerd[1465]: time="2026-04-14T00:46:31.919615708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ntbgs,Uid:f36e787b-5273-4efd-9fe5-582c58f1547b,Namespace:kube-system,Attempt:0,}"
Apr 14 00:46:32.035246 systemd-networkd[1389]: cni0: Link UP
Apr 14 00:46:32.035253 systemd-networkd[1389]: cni0: Gained carrier
Apr 14 00:46:32.043409 systemd-networkd[1389]: cni0: Lost carrier
Apr 14 00:46:32.045404 systemd-networkd[1389]: vethf6626d07: Link UP
Apr 14 00:46:32.052216 kernel: cni0: port 1(vethf6626d07) entered blocking state
Apr 14 00:46:32.052502 kernel: cni0: port 1(vethf6626d07) entered disabled state
Apr 14 00:46:32.052545 kernel: vethf6626d07: entered allmulticast mode
Apr 14 00:46:32.057753 kernel: vethf6626d07: entered promiscuous mode
Apr 14 00:46:32.057967 kernel: cni0: port 1(vethf6626d07) entered blocking state
Apr 14 00:46:32.058102 kernel: cni0: port 1(vethf6626d07) entered forwarding state
Apr 14 00:46:32.062712 kernel: cni0: port 1(vethf6626d07) entered disabled state
Apr 14 00:46:32.074466 kernel: cni0: port 1(vethf6626d07) entered blocking state
Apr 14 00:46:32.074562 kernel: cni0: port 1(vethf6626d07) entered forwarding state
Apr 14 00:46:32.074600 systemd-networkd[1389]: vethf6626d07: Gained carrier
Apr 14 00:46:32.077215 systemd-networkd[1389]: cni0: Gained carrier
Apr 14 00:46:32.128637 containerd[1465]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"}
Apr 14 00:46:32.128637 containerd[1465]: delegateAdd: netconf sent to delegate plugin:
Apr 14 00:46:32.271241 containerd[1465]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-04-14T00:46:32.270713938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 00:46:32.271241 containerd[1465]: time="2026-04-14T00:46:32.271017749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 00:46:32.271241 containerd[1465]: time="2026-04-14T00:46:32.271041610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:46:32.271539 containerd[1465]: time="2026-04-14T00:46:32.271254179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:46:32.487042 systemd[1]: Started cri-containerd-7251f3614506f5b41c9f56f84ad05cfba3c19d7d815eac8e5fa0c7f0b672b8c6.scope - libcontainer container 7251f3614506f5b41c9f56f84ad05cfba3c19d7d815eac8e5fa0c7f0b672b8c6.
Apr 14 00:46:32.607982 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 14 00:46:32.793307 containerd[1465]: time="2026-04-14T00:46:32.792632296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ntbgs,Uid:f36e787b-5273-4efd-9fe5-582c58f1547b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7251f3614506f5b41c9f56f84ad05cfba3c19d7d815eac8e5fa0c7f0b672b8c6\""
Apr 14 00:46:32.815209 kubelet[2640]: E0414 00:46:32.814143 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:32.944487 containerd[1465]: time="2026-04-14T00:46:32.942130378Z" level=info msg="CreateContainer within sandbox \"7251f3614506f5b41c9f56f84ad05cfba3c19d7d815eac8e5fa0c7f0b672b8c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 14 00:46:33.043075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3681771097.mount: Deactivated successfully.
Apr 14 00:46:33.048431 containerd[1465]: time="2026-04-14T00:46:33.048345106Z" level=info msg="CreateContainer within sandbox \"7251f3614506f5b41c9f56f84ad05cfba3c19d7d815eac8e5fa0c7f0b672b8c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"67f828f7a3160285c1ccc69af8fb5a04daa2ccd3f6ed836c576938366c6c54d3\""
Apr 14 00:46:33.052647 containerd[1465]: time="2026-04-14T00:46:33.051447689Z" level=info msg="StartContainer for \"67f828f7a3160285c1ccc69af8fb5a04daa2ccd3f6ed836c576938366c6c54d3\""
Apr 14 00:46:33.189836 systemd-networkd[1389]: cni0: Gained IPv6LL
Apr 14 00:46:33.277322 systemd[1]: Started cri-containerd-67f828f7a3160285c1ccc69af8fb5a04daa2ccd3f6ed836c576938366c6c54d3.scope - libcontainer container 67f828f7a3160285c1ccc69af8fb5a04daa2ccd3f6ed836c576938366c6c54d3.
Apr 14 00:46:33.395448 systemd-networkd[1389]: vethf6626d07: Gained IPv6LL
Apr 14 00:46:33.555687 containerd[1465]: time="2026-04-14T00:46:33.553709656Z" level=info msg="StartContainer for \"67f828f7a3160285c1ccc69af8fb5a04daa2ccd3f6ed836c576938366c6c54d3\" returns successfully"
Apr 14 00:46:34.088329 kubelet[2640]: E0414 00:46:34.088240 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:34.148337 kubelet[2640]: I0414 00:46:34.147352 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ntbgs" podStartSLOduration=33.147332778 podStartE2EDuration="33.147332778s" podCreationTimestamp="2026-04-14 00:46:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:46:34.146547159 +0000 UTC m=+34.799261272" watchObservedRunningTime="2026-04-14 00:46:34.147332778 +0000 UTC m=+34.800046891"
Apr 14 00:46:34.892909 kubelet[2640]: E0414 00:46:34.892614 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:34.895547 containerd[1465]: time="2026-04-14T00:46:34.895472629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4rdbz,Uid:9174e9b1-d631-4547-99c1-af41c3ab92b4,Namespace:kube-system,Attempt:0,}"
Apr 14 00:46:35.056263 systemd-networkd[1389]: veth6f847274: Link UP
Apr 14 00:46:35.060065 kernel: cni0: port 2(veth6f847274) entered blocking state
Apr 14 00:46:35.060231 kernel: cni0: port 2(veth6f847274) entered disabled state
Apr 14 00:46:35.060249 kernel: veth6f847274: entered allmulticast mode
Apr 14 00:46:35.061684 kernel: veth6f847274: entered promiscuous mode
Apr 14 00:46:35.075210 kernel: cni0: port 2(veth6f847274) entered blocking state
Apr 14 00:46:35.075324 kernel: cni0: port 2(veth6f847274) entered forwarding state
Apr 14 00:46:35.075399 systemd-networkd[1389]: veth6f847274: Gained carrier
Apr 14 00:46:35.078705 containerd[1465]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a950), "name":"cbr0", "type":"bridge"}
Apr 14 00:46:35.078705 containerd[1465]: delegateAdd: netconf sent to delegate plugin:
Apr 14 00:46:35.108836 kubelet[2640]: E0414 00:46:35.108406 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:35.155529 containerd[1465]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-04-14T00:46:35.150359542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 00:46:35.155529 containerd[1465]: time="2026-04-14T00:46:35.154941844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 00:46:35.155529 containerd[1465]: time="2026-04-14T00:46:35.154986254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:46:35.155529 containerd[1465]: time="2026-04-14T00:46:35.155212977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:46:35.202838 systemd[1]: Started cri-containerd-17064c19af9c67a838d4a5f02b8a5b024af1c63069b84ddd889001588e9350a5.scope - libcontainer container 17064c19af9c67a838d4a5f02b8a5b024af1c63069b84ddd889001588e9350a5.
Apr 14 00:46:35.261950 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 14 00:46:35.406693 containerd[1465]: time="2026-04-14T00:46:35.406298011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4rdbz,Uid:9174e9b1-d631-4547-99c1-af41c3ab92b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"17064c19af9c67a838d4a5f02b8a5b024af1c63069b84ddd889001588e9350a5\""
Apr 14 00:46:35.413942 kubelet[2640]: E0414 00:46:35.413831 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:35.438220 containerd[1465]: time="2026-04-14T00:46:35.438064536Z" level=info msg="CreateContainer within sandbox \"17064c19af9c67a838d4a5f02b8a5b024af1c63069b84ddd889001588e9350a5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 14 00:46:35.467792 containerd[1465]: time="2026-04-14T00:46:35.467671645Z" level=info msg="CreateContainer within sandbox \"17064c19af9c67a838d4a5f02b8a5b024af1c63069b84ddd889001588e9350a5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e57b03b7ce0cbab95dae244048475dae37e5536c397a567e37500aa9590948d\""
Apr 14 00:46:35.474440 containerd[1465]: time="2026-04-14T00:46:35.474338390Z" level=info msg="StartContainer for \"7e57b03b7ce0cbab95dae244048475dae37e5536c397a567e37500aa9590948d\""
Apr 14 00:46:35.563800 systemd[1]: Started cri-containerd-7e57b03b7ce0cbab95dae244048475dae37e5536c397a567e37500aa9590948d.scope - libcontainer container 7e57b03b7ce0cbab95dae244048475dae37e5536c397a567e37500aa9590948d.
Apr 14 00:46:35.642183 containerd[1465]: time="2026-04-14T00:46:35.641878035Z" level=info msg="StartContainer for \"7e57b03b7ce0cbab95dae244048475dae37e5536c397a567e37500aa9590948d\" returns successfully"
Apr 14 00:46:36.117923 kubelet[2640]: E0414 00:46:36.117394 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:36.202665 kubelet[2640]: I0414 00:46:36.200135 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4rdbz" podStartSLOduration=35.199011183 podStartE2EDuration="35.199011183s" podCreationTimestamp="2026-04-14 00:46:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:46:36.166827842 +0000 UTC m=+36.819541956" watchObservedRunningTime="2026-04-14 00:46:36.199011183 +0000 UTC m=+36.851725310"
Apr 14 00:46:36.518543 systemd-networkd[1389]: veth6f847274: Gained IPv6LL
Apr 14 00:46:37.138930 kubelet[2640]: E0414 00:46:37.138452 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:38.184773 kubelet[2640]: E0414 00:46:38.184221 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:47:04.756984 systemd[1]: Started sshd@5-10.0.0.23:22-10.0.0.1:35714.service - OpenSSH per-connection server daemon (10.0.0.1:35714).
Apr 14 00:47:05.020691 sshd[3694]: Accepted publickey for core from 10.0.0.1 port 35714 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:47:05.098707 sshd[3694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:47:05.194257 systemd-logind[1448]: New session 6 of user core.
Apr 14 00:47:05.214108 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 14 00:47:07.034136 sshd[3694]: pam_unix(sshd:session): session closed for user core
Apr 14 00:47:07.110325 systemd[1]: sshd@5-10.0.0.23:22-10.0.0.1:35714.service: Deactivated successfully.
Apr 14 00:47:07.126451 systemd[1]: session-6.scope: Deactivated successfully.
Apr 14 00:47:07.126992 systemd[1]: session-6.scope: Consumed 1.309s CPU time.
Apr 14 00:47:07.138035 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit.
Apr 14 00:47:07.156847 systemd-logind[1448]: Removed session 6.
Apr 14 00:47:12.127649 systemd[1]: Started sshd@6-10.0.0.23:22-10.0.0.1:55524.service - OpenSSH per-connection server daemon (10.0.0.1:55524).
Apr 14 00:47:12.432987 sshd[3745]: Accepted publickey for core from 10.0.0.1 port 55524 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:47:12.449369 sshd[3745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:47:12.465178 systemd-logind[1448]: New session 7 of user core.
Apr 14 00:47:12.475898 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 14 00:47:12.975850 sshd[3745]: pam_unix(sshd:session): session closed for user core
Apr 14 00:47:12.986021 systemd[1]: sshd@6-10.0.0.23:22-10.0.0.1:55524.service: Deactivated successfully.
Apr 14 00:47:12.994470 systemd[1]: session-7.scope: Deactivated successfully.
Apr 14 00:47:12.999053 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit.
Apr 14 00:47:13.003472 systemd-logind[1448]: Removed session 7.
Apr 14 00:47:18.074522 systemd[1]: Started sshd@7-10.0.0.23:22-10.0.0.1:60020.service - OpenSSH per-connection server daemon (10.0.0.1:60020).
Apr 14 00:47:18.365067 sshd[3780]: Accepted publickey for core from 10.0.0.1 port 60020 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:47:18.376397 sshd[3780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:47:18.429879 systemd-logind[1448]: New session 8 of user core.
Apr 14 00:47:18.444886 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 14 00:47:19.041467 sshd[3780]: pam_unix(sshd:session): session closed for user core
Apr 14 00:47:19.062547 systemd[1]: sshd@7-10.0.0.23:22-10.0.0.1:60020.service: Deactivated successfully.
Apr 14 00:47:19.074371 systemd[1]: session-8.scope: Deactivated successfully.
Apr 14 00:47:19.083595 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit.
Apr 14 00:47:19.087005 systemd-logind[1448]: Removed session 8.
Apr 14 00:47:24.085543 systemd[1]: Started sshd@8-10.0.0.23:22-10.0.0.1:60032.service - OpenSSH per-connection server daemon (10.0.0.1:60032).
Apr 14 00:47:24.397505 sshd[3815]: Accepted publickey for core from 10.0.0.1 port 60032 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:47:24.437836 sshd[3815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:47:24.502037 systemd-logind[1448]: New session 9 of user core.
Apr 14 00:47:24.516358 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 14 00:47:25.073490 sshd[3815]: pam_unix(sshd:session): session closed for user core
Apr 14 00:47:25.082871 systemd[1]: sshd@8-10.0.0.23:22-10.0.0.1:60032.service: Deactivated successfully.
Apr 14 00:47:25.094786 systemd[1]: session-9.scope: Deactivated successfully.
Apr 14 00:47:25.098865 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit.
Apr 14 00:47:25.114555 systemd[1]: Started sshd@9-10.0.0.23:22-10.0.0.1:60044.service - OpenSSH per-connection server daemon (10.0.0.1:60044).
Apr 14 00:47:25.120501 systemd-logind[1448]: Removed session 9.
Apr 14 00:47:25.376323 sshd[3836]: Accepted publickey for core from 10.0.0.1 port 60044 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:47:25.392844 sshd[3836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:47:25.409698 systemd-logind[1448]: New session 10 of user core.
Apr 14 00:47:25.422552 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 14 00:47:26.069657 sshd[3836]: pam_unix(sshd:session): session closed for user core
Apr 14 00:47:26.084411 systemd[1]: sshd@9-10.0.0.23:22-10.0.0.1:60044.service: Deactivated successfully.
Apr 14 00:47:26.088390 systemd[1]: session-10.scope: Deactivated successfully.
Apr 14 00:47:26.091403 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit.
Apr 14 00:47:26.102031 systemd[1]: Started sshd@10-10.0.0.23:22-10.0.0.1:48444.service - OpenSSH per-connection server daemon (10.0.0.1:48444).
Apr 14 00:47:26.110243 systemd-logind[1448]: Removed session 10.
Apr 14 00:47:26.152410 sshd[3849]: Accepted publickey for core from 10.0.0.1 port 48444 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:47:26.155669 sshd[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:47:26.167022 systemd-logind[1448]: New session 11 of user core.
Apr 14 00:47:26.178090 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 14 00:47:26.661295 sshd[3849]: pam_unix(sshd:session): session closed for user core
Apr 14 00:47:26.668977 systemd[1]: sshd@10-10.0.0.23:22-10.0.0.1:48444.service: Deactivated successfully.
Apr 14 00:47:26.671619 systemd[1]: session-11.scope: Deactivated successfully.
Apr 14 00:47:26.673767 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit.
Apr 14 00:47:26.676775 systemd-logind[1448]: Removed session 11.
Apr 14 00:47:26.918748 kubelet[2640]: E0414 00:47:26.918137 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:47:30.894473 kubelet[2640]: E0414 00:47:30.894283 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:47:31.776138 systemd[1]: Started sshd@11-10.0.0.23:22-10.0.0.1:48454.service - OpenSSH per-connection server daemon (10.0.0.1:48454).
Apr 14 00:47:31.909996 kubelet[2640]: E0414 00:47:31.906786 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:47:32.161691 sshd[3897]: Accepted publickey for core from 10.0.0.1 port 48454 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:47:32.167214 sshd[3897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:47:32.182888 systemd-logind[1448]: New session 12 of user core.
Apr 14 00:47:32.207968 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 14 00:47:32.905559 sshd[3897]: pam_unix(sshd:session): session closed for user core
Apr 14 00:47:32.920458 systemd[1]: sshd@11-10.0.0.23:22-10.0.0.1:48454.service: Deactivated successfully.
Apr 14 00:47:32.928908 systemd[1]: session-12.scope: Deactivated successfully.
Apr 14 00:47:32.930077 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit.
Apr 14 00:47:32.931188 systemd-logind[1448]: Removed session 12.
Apr 14 00:47:33.896289 kubelet[2640]: E0414 00:47:33.895534 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:47:37.945418 systemd[1]: Started sshd@12-10.0.0.23:22-10.0.0.1:37128.service - OpenSSH per-connection server daemon (10.0.0.1:37128).
Apr 14 00:47:37.993458 sshd[3933]: Accepted publickey for core from 10.0.0.1 port 37128 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:47:37.996331 sshd[3933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:47:38.006752 systemd-logind[1448]: New session 13 of user core.
Apr 14 00:47:38.021291 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 14 00:47:38.201014 sshd[3933]: pam_unix(sshd:session): session closed for user core
Apr 14 00:47:38.218936 systemd[1]: sshd@12-10.0.0.23:22-10.0.0.1:37128.service: Deactivated successfully.
Apr 14 00:47:38.230655 systemd[1]: session-13.scope: Deactivated successfully.
Apr 14 00:47:38.237127 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit.
Apr 14 00:47:38.252849 systemd[1]: Started sshd@13-10.0.0.23:22-10.0.0.1:37132.service - OpenSSH per-connection server daemon (10.0.0.1:37132).
Apr 14 00:47:38.259480 systemd-logind[1448]: Removed session 13.
Apr 14 00:47:38.432476 sshd[3948]: Accepted publickey for core from 10.0.0.1 port 37132 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:47:38.434859 sshd[3948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:47:38.445873 systemd-logind[1448]: New session 14 of user core.
Apr 14 00:47:38.469205 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 14 00:47:39.256797 sshd[3948]: pam_unix(sshd:session): session closed for user core
Apr 14 00:47:39.265530 systemd[1]: sshd@13-10.0.0.23:22-10.0.0.1:37132.service: Deactivated successfully.
Apr 14 00:47:39.267855 systemd[1]: session-14.scope: Deactivated successfully.
Apr 14 00:47:39.269160 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit.
Apr 14 00:47:39.277400 systemd[1]: Started sshd@14-10.0.0.23:22-10.0.0.1:37148.service - OpenSSH per-connection server daemon (10.0.0.1:37148).
Apr 14 00:47:39.280196 systemd-logind[1448]: Removed session 14.
Apr 14 00:47:39.559316 sshd[3961]: Accepted publickey for core from 10.0.0.1 port 37148 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:47:39.570464 sshd[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:47:39.584843 systemd-logind[1448]: New session 15 of user core.
Apr 14 00:47:39.604498 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 14 00:47:40.850420 sshd[3961]: pam_unix(sshd:session): session closed for user core
Apr 14 00:47:40.864407 systemd[1]: sshd@14-10.0.0.23:22-10.0.0.1:37148.service: Deactivated successfully.
Apr 14 00:47:40.869977 systemd[1]: session-15.scope: Deactivated successfully.
Apr 14 00:47:40.871806 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit.
Apr 14 00:47:40.882383 systemd[1]: Started sshd@15-10.0.0.23:22-10.0.0.1:37150.service - OpenSSH per-connection server daemon (10.0.0.1:37150).
Apr 14 00:47:40.886871 systemd-logind[1448]: Removed session 15.
Apr 14 00:47:40.972989 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 37150 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:47:40.975208 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:47:40.983763 systemd-logind[1448]: New session 16 of user core.
Apr 14 00:47:40.995468 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 14 00:47:41.778872 sshd[3987]: pam_unix(sshd:session): session closed for user core
Apr 14 00:47:41.793445 systemd[1]: sshd@15-10.0.0.23:22-10.0.0.1:37150.service: Deactivated successfully.
Apr 14 00:47:41.804383 systemd[1]: session-16.scope: Deactivated successfully.
Apr 14 00:47:41.807444 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit.
Apr 14 00:47:41.824776 systemd[1]: Started sshd@16-10.0.0.23:22-10.0.0.1:37164.service - OpenSSH per-connection server daemon (10.0.0.1:37164).
Apr 14 00:47:41.827395 systemd-logind[1448]: Removed session 16.
Apr 14 00:47:41.878835 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 37164 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:47:41.882506 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:47:41.894469 kubelet[2640]: E0414 00:47:41.894346 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:47:41.895546 kubelet[2640]: E0414 00:47:41.895244 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:47:41.901386 systemd-logind[1448]: New session 17 of user core.
Apr 14 00:47:41.941521 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 14 00:47:42.349120 sshd[4013]: pam_unix(sshd:session): session closed for user core
Apr 14 00:47:42.354492 systemd[1]: sshd@16-10.0.0.23:22-10.0.0.1:37164.service: Deactivated successfully.
Apr 14 00:47:42.356746 systemd[1]: session-17.scope: Deactivated successfully.
Apr 14 00:47:42.359075 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit.
Apr 14 00:47:42.362869 systemd-logind[1448]: Removed session 17.
Apr 14 00:47:47.494160 systemd[1]: Started sshd@17-10.0.0.23:22-10.0.0.1:58482.service - OpenSSH per-connection server daemon (10.0.0.1:58482).
Apr 14 00:47:47.877495 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 58482 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:47:47.901539 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:47:48.008669 systemd-logind[1448]: New session 18 of user core.
Apr 14 00:47:48.028234 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 14 00:47:48.904900 sshd[4047]: pam_unix(sshd:session): session closed for user core
Apr 14 00:47:48.939343 systemd[1]: sshd@17-10.0.0.23:22-10.0.0.1:58482.service: Deactivated successfully.
Apr 14 00:47:48.994148 systemd[1]: session-18.scope: Deactivated successfully.
Apr 14 00:47:49.007858 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit.
Apr 14 00:47:49.018183 systemd-logind[1448]: Removed session 18.
Apr 14 00:47:51.944485 kubelet[2640]: E0414 00:47:51.943346 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:47:54.005518 systemd[1]: Started sshd@18-10.0.0.23:22-10.0.0.1:58486.service - OpenSSH per-connection server daemon (10.0.0.1:58486).
Apr 14 00:47:54.400078 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 58486 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:47:54.406483 sshd[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:47:54.506991 systemd-logind[1448]: New session 19 of user core.
Apr 14 00:47:54.581234 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 14 00:47:55.280228 sshd[4082]: pam_unix(sshd:session): session closed for user core
Apr 14 00:47:55.295844 systemd[1]: sshd@18-10.0.0.23:22-10.0.0.1:58486.service: Deactivated successfully.
Apr 14 00:47:55.301346 systemd[1]: session-19.scope: Deactivated successfully.
Apr 14 00:47:55.306283 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit.
Apr 14 00:47:55.312971 systemd-logind[1448]: Removed session 19.
Apr 14 00:48:00.338414 systemd[1]: Started sshd@19-10.0.0.23:22-10.0.0.1:32992.service - OpenSSH per-connection server daemon (10.0.0.1:32992).
Apr 14 00:48:00.478010 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 32992 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:48:00.482543 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:48:00.504304 systemd-logind[1448]: New session 20 of user core.
Apr 14 00:48:00.528010 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 14 00:48:00.806421 sshd[4118]: pam_unix(sshd:session): session closed for user core
Apr 14 00:48:00.820439 systemd[1]: sshd@19-10.0.0.23:22-10.0.0.1:32992.service: Deactivated successfully.
Apr 14 00:48:00.831380 systemd[1]: session-20.scope: Deactivated successfully.
Apr 14 00:48:00.843229 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit.
Apr 14 00:48:00.848707 systemd-logind[1448]: Removed session 20.
Apr 14 00:48:05.854381 systemd[1]: Started sshd@20-10.0.0.23:22-10.0.0.1:57074.service - OpenSSH per-connection server daemon (10.0.0.1:57074).
Apr 14 00:48:06.057411 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 57074 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw
Apr 14 00:48:06.063525 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:48:06.078199 systemd-logind[1448]: New session 21 of user core.
Apr 14 00:48:06.084964 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 14 00:48:06.481162 sshd[4161]: pam_unix(sshd:session): session closed for user core
Apr 14 00:48:06.509416 systemd[1]: sshd@20-10.0.0.23:22-10.0.0.1:57074.service: Deactivated successfully.
Apr 14 00:48:06.538184 systemd[1]: session-21.scope: Deactivated successfully.
Apr 14 00:48:06.544855 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit.
Apr 14 00:48:06.613081 systemd-logind[1448]: Removed session 21.