Apr 22 23:47:49.871797 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Apr 22 21:57:11 -00 2026
Apr 22 23:47:49.871875 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1111a64faf79e22c6b231a95ce03ff7308375557d63046382fb274ec481eaec
Apr 22 23:47:49.871890 kernel: BIOS-provided physical RAM map:
Apr 22 23:47:49.871897 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 22 23:47:49.871904 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 22 23:47:49.871910 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 22 23:47:49.871918 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 22 23:47:49.871925 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 22 23:47:49.871933 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 22 23:47:49.871941 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 22 23:47:49.871950 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 22 23:47:49.871957 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 22 23:47:49.871964 kernel: NX (Execute Disable) protection: active
Apr 22 23:47:49.871971 kernel: APIC: Static calls initialized
Apr 22 23:47:49.871979 kernel: SMBIOS 2.8 present.
Apr 22 23:47:49.871989 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 22 23:47:49.871996 kernel: DMI: Memory slots populated: 1/1
Apr 22 23:47:49.872004 kernel: Hypervisor detected: KVM
Apr 22 23:47:49.872012 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 22 23:47:49.872020 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 22 23:47:49.872029 kernel: kvm-clock: using sched offset of 13308523416 cycles
Apr 22 23:47:49.872038 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 22 23:47:49.872046 kernel: tsc: Detected 2793.438 MHz processor
Apr 22 23:47:49.872054 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 22 23:47:49.872065 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 22 23:47:49.872073 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 22 23:47:49.872081 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 22 23:47:49.872089 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 22 23:47:49.872098 kernel: Using GB pages for direct mapping
Apr 22 23:47:49.872106 kernel: ACPI: Early table checksum verification disabled
Apr 22 23:47:49.872115 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 22 23:47:49.872126 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 22 23:47:49.872136 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 22 23:47:49.872145 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 22 23:47:49.872154 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 22 23:47:49.872164 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 22 23:47:49.872172 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 22 23:47:49.872180 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 22 23:47:49.872189 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 22 23:47:49.872198 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 22 23:47:49.872209 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 22 23:47:49.872218 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 22 23:47:49.872226 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 22 23:47:49.872238 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 22 23:47:49.872248 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 22 23:47:49.872258 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 22 23:47:49.872267 kernel: No NUMA configuration found
Apr 22 23:47:49.872278 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 22 23:47:49.872287 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Apr 22 23:47:49.872295 kernel: Zone ranges:
Apr 22 23:47:49.872306 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 22 23:47:49.872314 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 22 23:47:49.872322 kernel: Normal empty
Apr 22 23:47:49.872330 kernel: Device empty
Apr 22 23:47:49.872338 kernel: Movable zone start for each node
Apr 22 23:47:49.872346 kernel: Early memory node ranges
Apr 22 23:47:49.872356 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 22 23:47:49.872368 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 22 23:47:49.872377 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 22 23:47:49.872388 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 22 23:47:49.872398 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 22 23:47:49.872408 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 22 23:47:49.872419 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 22 23:47:49.872429 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 22 23:47:49.872439 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 22 23:47:49.872451 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 22 23:47:49.872461 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 22 23:47:49.872471 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 22 23:47:49.872480 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 22 23:47:49.872488 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 22 23:47:49.872497 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 22 23:47:49.872505 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 22 23:47:49.872515 kernel: TSC deadline timer available
Apr 22 23:47:49.872523 kernel: CPU topo: Max. logical packages: 1
Apr 22 23:47:49.872531 kernel: CPU topo: Max. logical dies: 1
Apr 22 23:47:49.872539 kernel: CPU topo: Max. dies per package: 1
Apr 22 23:47:49.872549 kernel: CPU topo: Max. threads per core: 1
Apr 22 23:47:49.874406 kernel: CPU topo: Num. cores per package: 4
Apr 22 23:47:49.874440 kernel: CPU topo: Num. threads per package: 4
Apr 22 23:47:49.874459 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 22 23:47:49.874468 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 22 23:47:49.874476 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 22 23:47:49.874485 kernel: kvm-guest: setup PV sched yield
Apr 22 23:47:49.874493 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 22 23:47:49.874501 kernel: Booting paravirtualized kernel on KVM
Apr 22 23:47:49.874510 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 22 23:47:49.874522 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 22 23:47:49.874532 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u524288
Apr 22 23:47:49.874540 kernel: pcpu-alloc: s207448 r8192 d30120 u524288 alloc=1*2097152
Apr 22 23:47:49.874549 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 22 23:47:49.874557 kernel: kvm-guest: PV spinlocks enabled
Apr 22 23:47:49.874678 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 22 23:47:49.874688 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1111a64faf79e22c6b231a95ce03ff7308375557d63046382fb274ec481eaec
Apr 22 23:47:49.875002 kernel: random: crng init done
Apr 22 23:47:49.875012 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 22 23:47:49.875021 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 22 23:47:49.875030 kernel: Fallback order for Node 0: 0
Apr 22 23:47:49.875039 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Apr 22 23:47:49.875048 kernel: Policy zone: DMA32
Apr 22 23:47:49.875056 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 22 23:47:49.876029 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 22 23:47:49.876087 kernel: ftrace: allocating 40157 entries in 157 pages
Apr 22 23:47:49.876097 kernel: ftrace: allocated 157 pages with 5 groups
Apr 22 23:47:49.876109 kernel: Dynamic Preempt: voluntary
Apr 22 23:47:49.876119 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 22 23:47:49.876129 kernel: rcu: RCU event tracing is enabled.
Apr 22 23:47:49.876140 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 22 23:47:49.876149 kernel: Trampoline variant of Tasks RCU enabled.
Apr 22 23:47:49.876177 kernel: Rude variant of Tasks RCU enabled.
Apr 22 23:47:49.876187 kernel: Tracing variant of Tasks RCU enabled.
Apr 22 23:47:49.876197 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 22 23:47:49.876208 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 22 23:47:49.876219 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 22 23:47:49.876230 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 22 23:47:49.876241 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 22 23:47:49.876253 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 22 23:47:49.876265 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 22 23:47:49.876275 kernel: Console: colour VGA+ 80x25
Apr 22 23:47:49.876291 kernel: printk: legacy console [ttyS0] enabled
Apr 22 23:47:49.876303 kernel: ACPI: Core revision 20240827
Apr 22 23:47:49.876313 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 22 23:47:49.876323 kernel: APIC: Switch to symmetric I/O mode setup
Apr 22 23:47:49.876334 kernel: x2apic enabled
Apr 22 23:47:49.876345 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 22 23:47:49.876358 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 22 23:47:49.876370 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 22 23:47:49.876382 kernel: kvm-guest: setup PV IPIs
Apr 22 23:47:49.876393 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 22 23:47:49.876406 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 22 23:47:49.876416 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 22 23:47:49.876426 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 22 23:47:49.876436 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 22 23:47:49.876446 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 22 23:47:49.876455 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 22 23:47:49.876465 kernel: Spectre V2 : Mitigation: Retpolines
Apr 22 23:47:49.876477 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 22 23:47:49.876487 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 22 23:47:49.876496 kernel: RETBleed: Vulnerable
Apr 22 23:47:49.876505 kernel: Speculative Store Bypass: Vulnerable
Apr 22 23:47:49.876514 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 22 23:47:49.876523 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 22 23:47:49.876532 kernel: active return thunk: its_return_thunk
Apr 22 23:47:49.876544 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 22 23:47:49.876554 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 22 23:47:49.877427 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 22 23:47:49.877437 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 22 23:47:49.877443 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 22 23:47:49.877450 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 22 23:47:49.877457 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 22 23:47:49.877483 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 22 23:47:49.877489 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 22 23:47:49.877496 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 22 23:47:49.877502 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 22 23:47:49.877509 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 22 23:47:49.877516 kernel: Freeing SMP alternatives memory: 32K
Apr 22 23:47:49.877522 kernel: pid_max: default: 32768 minimum: 301
Apr 22 23:47:49.877530 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 22 23:47:49.877536 kernel: landlock: Up and running.
Apr 22 23:47:49.877543 kernel: SELinux: Initializing.
Apr 22 23:47:49.877549 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 22 23:47:49.877556 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 22 23:47:49.878126 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 22 23:47:49.878139 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 22 23:47:49.878166 kernel: signal: max sigframe size: 3632
Apr 22 23:47:49.878175 kernel: rcu: Hierarchical SRCU implementation.
Apr 22 23:47:49.878185 kernel: rcu: Max phase no-delay instances is 400.
Apr 22 23:47:49.878194 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 22 23:47:49.878203 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 22 23:47:49.878214 kernel: smp: Bringing up secondary CPUs ...
Apr 22 23:47:49.878224 kernel: smpboot: x86: Booting SMP configuration:
Apr 22 23:47:49.878234 kernel: .... node #0, CPUs: #1 #2 #3
Apr 22 23:47:49.878240 kernel: smp: Brought up 1 node, 4 CPUs
Apr 22 23:47:49.878247 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 22 23:47:49.878254 kernel: Memory: 2444328K/2571752K available (14336K kernel code, 2453K rwdata, 31656K rodata, 15552K init, 2472K bss, 121536K reserved, 0K cma-reserved)
Apr 22 23:47:49.878261 kernel: devtmpfs: initialized
Apr 22 23:47:49.878267 kernel: x86/mm: Memory block size: 128MB
Apr 22 23:47:49.878273 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 22 23:47:49.878282 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 22 23:47:49.878288 kernel: pinctrl core: initialized pinctrl subsystem
Apr 22 23:47:49.878295 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 22 23:47:49.878301 kernel: audit: initializing netlink subsys (disabled)
Apr 22 23:47:49.878308 kernel: audit: type=2000 audit(1776901654.615:1): state=initialized audit_enabled=0 res=1
Apr 22 23:47:49.878314 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 22 23:47:49.878320 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 22 23:47:49.878327 kernel: cpuidle: using governor menu
Apr 22 23:47:49.878335 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 22 23:47:49.878341 kernel: dca service started, version 1.12.1
Apr 22 23:47:49.878348 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Apr 22 23:47:49.878355 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 22 23:47:49.878361 kernel: PCI: Using configuration type 1 for base access
Apr 22 23:47:49.878367 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 22 23:47:49.878374 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 22 23:47:49.878382 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 22 23:47:49.878389 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 22 23:47:49.878395 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 22 23:47:49.878401 kernel: ACPI: Added _OSI(Module Device)
Apr 22 23:47:49.878408 kernel: ACPI: Added _OSI(Processor Device)
Apr 22 23:47:49.878414 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 22 23:47:49.878421 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 22 23:47:49.878429 kernel: ACPI: Interpreter enabled
Apr 22 23:47:49.878436 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 22 23:47:49.878442 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 22 23:47:49.878448 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 22 23:47:49.878455 kernel: PCI: Using E820 reservations for host bridge windows
Apr 22 23:47:49.878462 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 22 23:47:49.878468 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 22 23:47:49.879101 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 22 23:47:49.879190 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 22 23:47:49.879265 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 22 23:47:49.879273 kernel: PCI host bridge to bus 0000:00
Apr 22 23:47:49.879351 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 22 23:47:49.879422 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 22 23:47:49.879488 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 22 23:47:49.879553 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 22 23:47:49.880281 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 22 23:47:49.880349 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 22 23:47:49.880416 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 22 23:47:49.880943 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 22 23:47:49.881034 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 22 23:47:49.881109 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Apr 22 23:47:49.882337 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Apr 22 23:47:49.882846 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Apr 22 23:47:49.882936 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 22 23:47:49.883021 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 22 23:47:49.883095 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Apr 22 23:47:49.883170 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Apr 22 23:47:49.883243 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 22 23:47:49.883323 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 22 23:47:49.883399 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Apr 22 23:47:49.883472 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Apr 22 23:47:49.883544 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 22 23:47:49.884129 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 22 23:47:49.884808 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Apr 22 23:47:49.884898 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Apr 22 23:47:49.884974 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 22 23:47:49.885050 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Apr 22 23:47:49.885134 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 22 23:47:49.885209 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 22 23:47:49.885282 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 10742 usecs
Apr 22 23:47:49.886470 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 22 23:47:49.886824 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Apr 22 23:47:49.886910 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Apr 22 23:47:49.886993 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 22 23:47:49.887070 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Apr 22 23:47:49.887079 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 22 23:47:49.887551 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 22 23:47:49.888085 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 22 23:47:49.888118 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 22 23:47:49.888124 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 22 23:47:49.888131 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 22 23:47:49.888138 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 22 23:47:49.888144 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 22 23:47:49.888421 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 22 23:47:49.888430 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 22 23:47:49.888438 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 22 23:47:49.888445 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 22 23:47:49.888453 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 22 23:47:49.888461 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 22 23:47:49.888468 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 22 23:47:49.889519 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 22 23:47:49.890309 kernel: iommu: Default domain type: Translated
Apr 22 23:47:49.890319 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 22 23:47:49.890328 kernel: PCI: Using ACPI for IRQ routing
Apr 22 23:47:49.890336 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 22 23:47:49.890345 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 22 23:47:49.890353 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 22 23:47:49.890950 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 22 23:47:49.891057 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 22 23:47:49.891130 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 22 23:47:49.891138 kernel: vgaarb: loaded
Apr 22 23:47:49.891145 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 22 23:47:49.891152 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 22 23:47:49.891159 kernel: clocksource: Switched to clocksource kvm-clock
Apr 22 23:47:49.891169 kernel: VFS: Disk quotas dquot_6.6.0
Apr 22 23:47:49.891176 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 22 23:47:49.891182 kernel: pnp: PnP ACPI init
Apr 22 23:47:49.892057 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 22 23:47:49.892094 kernel: pnp: PnP ACPI: found 6 devices
Apr 22 23:47:49.892101 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 22 23:47:49.892108 kernel: NET: Registered PF_INET protocol family
Apr 22 23:47:49.892118 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 22 23:47:49.892125 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 22 23:47:49.892132 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 22 23:47:49.892139 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 22 23:47:49.892145 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 22 23:47:49.892152 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 22 23:47:49.892158 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 22 23:47:49.892167 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 22 23:47:49.892174 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 22 23:47:49.892180 kernel: NET: Registered PF_XDP protocol family
Apr 22 23:47:49.892255 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 22 23:47:49.892322 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 22 23:47:49.892388 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 22 23:47:49.892456 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 22 23:47:49.892522 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 22 23:47:49.893064 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 22 23:47:49.893096 kernel: PCI: CLS 0 bytes, default 64
Apr 22 23:47:49.893103 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 22 23:47:49.893110 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 22 23:47:49.893116 kernel: Initialise system trusted keyrings
Apr 22 23:47:49.893127 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 22 23:47:49.893134 kernel: Key type asymmetric registered
Apr 22 23:47:49.893140 kernel: Asymmetric key parser 'x509' registered
Apr 22 23:47:49.893146 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 22 23:47:49.893153 kernel: io scheduler mq-deadline registered
Apr 22 23:47:49.893159 kernel: io scheduler kyber registered
Apr 22 23:47:49.893166 kernel: io scheduler bfq registered
Apr 22 23:47:49.893174 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 22 23:47:49.893182 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 22 23:47:49.893188 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 22 23:47:49.893195 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 22 23:47:49.893201 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 22 23:47:49.893208 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 22 23:47:49.893215 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 22 23:47:49.893223 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 22 23:47:49.893229 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 22 23:47:49.893313 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 22 23:47:49.893322 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 22 23:47:49.893390 kernel: rtc_cmos 00:04: registered as rtc0
Apr 22 23:47:49.893458 kernel: rtc_cmos 00:04: setting system clock to 2026-04-22T23:47:42 UTC (1776901662)
Apr 22 23:47:49.893526 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 22 23:47:49.893537 kernel: intel_pstate: CPU model not supported
Apr 22 23:47:49.893544 kernel: NET: Registered PF_INET6 protocol family
Apr 22 23:47:49.893550 kernel: Segment Routing with IPv6
Apr 22 23:47:49.893557 kernel: In-situ OAM (IOAM) with IPv6
Apr 22 23:47:49.893678 kernel: NET: Registered PF_PACKET protocol family
Apr 22 23:47:49.893685 kernel: Key type dns_resolver registered
Apr 22 23:47:49.893691 kernel: IPI shorthand broadcast: enabled
Apr 22 23:47:49.893748 kernel: sched_clock: Marking stable (5306092034, 1667316072)->(8020724879, -1047316773)
Apr 22 23:47:49.893755 kernel: registered taskstats version 1
Apr 22 23:47:49.893762 kernel: Loading compiled-in X.509 certificates
Apr 22 23:47:49.893769 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 0793482f0b1477a4dee00a55cce942e30dec635a'
Apr 22 23:47:49.893776 kernel: Demotion targets for Node 0: null
Apr 22 23:47:49.893783 kernel: Key type .fscrypt registered
Apr 22 23:47:49.893789 kernel: Key type fscrypt-provisioning registered
Apr 22 23:47:49.893797 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 22 23:47:49.893804 kernel: ima: Allocated hash algorithm: sha1
Apr 22 23:47:49.893810 kernel: ima: No architecture policies found
Apr 22 23:47:49.893817 kernel: clk: Disabling unused clocks
Apr 22 23:47:49.893824 kernel: Freeing unused kernel image (initmem) memory: 15552K
Apr 22 23:47:49.893830 kernel: Write protecting the kernel read-only data: 47104k
Apr 22 23:47:49.893837 kernel: Freeing unused kernel image (rodata/data gap) memory: 1112K
Apr 22 23:47:49.893845 kernel: Run /init as init process
Apr 22 23:47:49.893851 kernel: with arguments:
Apr 22 23:47:49.893858 kernel: /init
Apr 22 23:47:49.893864 kernel: with environment:
Apr 22 23:47:49.893870 kernel: HOME=/
Apr 22 23:47:49.893877 kernel: TERM=linux
Apr 22 23:47:49.893883 kernel: SCSI subsystem initialized
Apr 22 23:47:49.893889 kernel: libata version 3.00 loaded.
Apr 22 23:47:49.893975 kernel: ahci 0000:00:1f.2: version 3.0 Apr 22 23:47:49.893985 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 22 23:47:49.894069 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 22 23:47:49.894146 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 22 23:47:49.894223 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 22 23:47:49.894315 kernel: scsi host0: ahci Apr 22 23:47:49.894399 kernel: scsi host1: ahci Apr 22 23:47:49.894482 kernel: scsi host2: ahci Apr 22 23:47:49.894659 kernel: scsi host3: ahci Apr 22 23:47:49.894859 kernel: scsi host4: ahci Apr 22 23:47:49.894945 kernel: scsi host5: ahci Apr 22 23:47:49.894958 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Apr 22 23:47:49.894966 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Apr 22 23:47:49.894972 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Apr 22 23:47:49.894979 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Apr 22 23:47:49.894986 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Apr 22 23:47:49.894993 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Apr 22 23:47:49.895001 kernel: hrtimer: interrupt took 4639863 ns Apr 22 23:47:49.895009 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 22 23:47:49.895015 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 22 23:47:49.895022 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 22 23:47:49.895029 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 22 23:47:49.895035 kernel: ata3.00: LPM support broken, forcing max_power Apr 22 23:47:49.895042 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 22 23:47:49.895050 kernel: ata3.00: applying bridge limits Apr 22 23:47:49.895057 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 22 23:47:49.895063 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 22 23:47:49.895070 kernel: ata3.00: LPM support broken, forcing max_power Apr 22 23:47:49.895077 kernel: ata3.00: configured for UDMA/100 Apr 22 23:47:49.895170 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 22 23:47:49.895260 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 22 23:47:49.895346 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 22 23:47:49.895421 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Apr 22 23:47:49.895429 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 22 23:47:49.895436 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 22 23:47:49.895443 kernel: GPT:16515071 != 27000831 Apr 22 23:47:49.895452 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 22 23:47:49.895461 kernel: GPT:16515071 != 27000831 Apr 22 23:47:49.895467 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 22 23:47:49.895474 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 22 23:47:49.895555 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 22 23:47:49.895661 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
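[Annotation, not part of the log] The GPT complaints above are the kernel noticing that the backup GPT header is not at the last LBA of the disk: the image was built at 16515072 sectors and the virtual disk was later grown to 27000832, so the backup header still sits at the old end. A minimal sketch of the arithmetic behind the check (the helper name is illustrative, not kernel code):

```python
# Illustrative helper mirroring the kernel's GPT sanity check; not kernel code.
def expected_alt_header_lba(total_sectors: int) -> int:
    # A valid backup GPT header must occupy the last addressable LBA.
    return total_sectors - 1

# Values from the log: vda reports 27000832 512-byte sectors,
# but the backup header was found at LBA 16515071.
found_alt_lba = 16515071
expected = expected_alt_header_lba(27000832)
print(expected, found_alt_lba != expected)  # 27000831 True (mismatch)
```

On Flatcar this mismatch is expected on first boot after a disk resize; disk-uuid.service later rewrites the table, as seen further down in the log.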
Apr 22 23:47:49.895668 kernel: device-mapper: uevent: version 1.0.3 Apr 22 23:47:49.895675 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 22 23:47:49.895684 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Apr 22 23:47:49.895691 kernel: raid6: avx512x4 gen() 32453 MB/s Apr 22 23:47:49.895743 kernel: raid6: avx512x2 gen() 29359 MB/s Apr 22 23:47:49.895751 kernel: raid6: avx512x1 gen() 20808 MB/s Apr 22 23:47:49.895759 kernel: raid6: avx2x4 gen() 19532 MB/s Apr 22 23:47:49.895766 kernel: raid6: avx2x2 gen() 17680 MB/s Apr 22 23:47:49.895773 kernel: raid6: avx2x1 gen() 13444 MB/s Apr 22 23:47:49.895780 kernel: raid6: using algorithm avx512x4 gen() 32453 MB/s Apr 22 23:47:49.895787 kernel: raid6: .... xor() 6848 MB/s, rmw enabled Apr 22 23:47:49.895794 kernel: raid6: using avx512x2 recovery algorithm Apr 22 23:47:49.895801 kernel: xor: automatically using best checksumming function avx Apr 22 23:47:49.895807 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 22 23:47:49.895816 kernel: BTRFS: device fsid 3ae7ba34-f7bd-4b4e-97e5-7ce72707b9fd devid 1 transid 32 /dev/mapper/usr (253:0) scanned by mount (182) Apr 22 23:47:49.895823 kernel: BTRFS info (device dm-0): first mount of filesystem 3ae7ba34-f7bd-4b4e-97e5-7ce72707b9fd Apr 22 23:47:49.895830 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 22 23:47:49.895836 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 22 23:47:49.895843 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 22 23:47:49.895851 kernel: loop: module loaded Apr 22 23:47:49.895858 kernel: loop0: detected capacity change from 0 to 100560 Apr 22 23:47:49.895866 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 22 23:47:49.895875 systemd[1]: Successfully made /usr/ read-only. 
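[Annotation, not part of the log] The raid6 lines above show the kernel benchmarking every available gen() implementation at boot and keeping the fastest one ("using algorithm avx512x4 gen() 32453 MB/s"). The selection reduces to a max over measured throughput; a toy sketch using the numbers from this boot (the dict and variable names are mine, not kernel symbols):

```python
# gen() throughputs measured in this boot, in MB/s (copied from the log above).
gen_results = {
    "avx512x4": 32453,
    "avx512x2": 29359,
    "avx512x1": 20808,
    "avx2x4": 19532,
    "avx2x2": 17680,
    "avx2x1": 13444,
}

# Keep the implementation with the highest measured throughput,
# as the kernel does when it prints "raid6: using algorithm ...".
best = max(gen_results, key=gen_results.get)
print(best, gen_results[best])  # avx512x4 32453
```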
Apr 22 23:47:49.895885 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 22 23:47:49.895892 systemd[1]: Detected virtualization kvm. Apr 22 23:47:49.895899 systemd[1]: Detected architecture x86-64. Apr 22 23:47:49.895907 systemd[1]: Running in initrd. Apr 22 23:47:49.895914 systemd[1]: No hostname configured, using default hostname. Apr 22 23:47:49.895921 systemd[1]: Hostname set to . Apr 22 23:47:49.895928 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Apr 22 23:47:49.895936 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1077546374 wd_nsec: 1077546338 Apr 22 23:47:49.895943 systemd[1]: Queued start job for default target initrd.target. Apr 22 23:47:49.895950 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Apr 22 23:47:49.895959 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 22 23:47:49.895967 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 22 23:47:49.895975 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 22 23:47:49.895982 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 22 23:47:49.895990 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 22 23:47:49.895999 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 22 23:47:49.896006 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Apr 22 23:47:49.896013 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 22 23:47:49.896020 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 22 23:47:49.896027 systemd[1]: Reached target paths.target - Path Units. Apr 22 23:47:49.896034 systemd[1]: Reached target slices.target - Slice Units. Apr 22 23:47:49.896041 systemd[1]: Reached target swap.target - Swaps. Apr 22 23:47:49.896050 systemd[1]: Reached target timers.target - Timer Units. Apr 22 23:47:49.896057 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 22 23:47:49.896064 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 22 23:47:49.896071 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Apr 22 23:47:49.896078 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 22 23:47:49.896085 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 22 23:47:49.896092 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 22 23:47:49.896101 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 22 23:47:49.896108 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 22 23:47:49.896115 systemd[1]: Reached target sockets.target - Socket Units. Apr 22 23:47:49.896122 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 22 23:47:49.896130 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 22 23:47:49.896137 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 22 23:47:49.896144 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Apr 22 23:47:49.896153 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 22 23:47:49.896160 systemd[1]: Starting systemd-fsck-usr.service... Apr 22 23:47:49.896167 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 22 23:47:49.896174 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 22 23:47:49.896183 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 22 23:47:49.896190 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 22 23:47:49.896197 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 22 23:47:49.897104 systemd-journald[317]: Collecting audit messages is enabled. Apr 22 23:47:49.897213 systemd[1]: Finished systemd-fsck-usr.service. Apr 22 23:47:49.897223 kernel: audit: type=1130 audit(1776901669.863:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:49.897234 systemd-journald[317]: Journal started Apr 22 23:47:49.897257 systemd-journald[317]: Runtime Journal (/run/log/journal/7635b1b405cf4c30891826f297c66e79) is 6M, max 48.1M, 42.1M free. Apr 22 23:47:49.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:49.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:47:49.923450 kernel: audit: type=1130 audit(1776901669.896:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:49.923524 systemd[1]: Started systemd-journald.service - Journal Service. Apr 22 23:47:49.938888 kernel: audit: type=1130 audit(1776901669.929:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:49.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:49.961933 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 22 23:47:49.966687 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 22 23:47:50.675260 systemd-tmpfiles[333]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 22 23:47:51.695201 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 22 23:47:51.695392 kernel: Bridge firewalling registered Apr 22 23:47:51.695454 kernel: audit: type=1130 audit(1776901671.672:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:51.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:47:50.678449 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 22 23:47:51.728233 kernel: audit: type=1130 audit(1776901671.709:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:51.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:50.877441 systemd-modules-load[320]: Inserted module 'br_netfilter' Apr 22 23:47:51.675072 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 22 23:47:51.782200 kernel: audit: type=1130 audit(1776901671.761:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:51.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:51.728213 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 22 23:47:51.813219 kernel: audit: type=1130 audit(1776901671.789:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:51.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:51.781964 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 22 23:47:51.885260 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 22 23:47:51.913133 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 22 23:47:51.926959 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 22 23:47:52.001295 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 22 23:47:52.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:52.019802 kernel: audit: type=1130 audit(1776901672.000:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:52.037483 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 22 23:47:52.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:52.053000 audit: BPF prog-id=6 op=LOAD Apr 22 23:47:52.064024 kernel: audit: type=1130 audit(1776901672.038:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:52.064250 kernel: audit: type=1334 audit(1776901672.053:11): prog-id=6 op=LOAD Apr 22 23:47:52.065532 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 22 23:47:52.097185 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 22 23:47:52.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:52.113433 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 22 23:47:52.188458 dracut-cmdline[361]: dracut-109 Apr 22 23:47:52.198890 dracut-cmdline[361]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1111a64faf79e22c6b231a95ce03ff7308375557d63046382fb274ec481eaec Apr 22 23:47:52.383489 systemd-resolved[355]: Positive Trust Anchors: Apr 22 23:47:52.384484 systemd-resolved[355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 22 23:47:52.384543 systemd-resolved[355]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Apr 22 23:47:52.384679 systemd-resolved[355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 22 23:47:52.621533 systemd-resolved[355]: Defaulting to hostname 'linux'. Apr 22 23:47:52.651322 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Apr 22 23:47:52.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:52.666550 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 22 23:47:53.452314 kernel: Loading iSCSI transport class v2.0-870. Apr 22 23:47:53.525084 kernel: iscsi: registered transport (tcp) Apr 22 23:47:53.672359 kernel: iscsi: registered transport (qla4xxx) Apr 22 23:47:53.673235 kernel: QLogic iSCSI HBA Driver Apr 22 23:47:54.119326 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 22 23:47:54.310781 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 22 23:47:54.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:54.328356 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 22 23:47:55.076210 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 22 23:47:55.139146 kernel: kauditd_printk_skb: 3 callbacks suppressed Apr 22 23:47:55.141244 kernel: audit: type=1130 audit(1776901675.086:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:55.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:55.093537 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 22 23:47:55.122227 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 22 23:47:55.375696 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 22 23:47:55.416978 kernel: audit: type=1130 audit(1776901675.375:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:55.417004 kernel: audit: type=1334 audit(1776901675.391:17): prog-id=7 op=LOAD Apr 22 23:47:55.417080 kernel: audit: type=1334 audit(1776901675.391:18): prog-id=8 op=LOAD Apr 22 23:47:55.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:55.391000 audit: BPF prog-id=7 op=LOAD Apr 22 23:47:55.391000 audit: BPF prog-id=8 op=LOAD Apr 22 23:47:55.397502 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 22 23:47:55.500022 systemd-udevd[583]: Using default interface naming scheme 'v257'. Apr 22 23:47:55.607288 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 22 23:47:55.643911 kernel: audit: type=1130 audit(1776901675.620:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:55.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:55.631054 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 22 23:47:55.724829 dracut-pre-trigger[615]: rd.md=0: removing MD RAID activation Apr 22 23:47:55.826238 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 22 23:47:55.858242 kernel: audit: type=1130 audit(1776901675.825:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:55.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:55.830045 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 22 23:47:56.054241 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 22 23:47:56.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:56.069365 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 22 23:47:56.111123 kernel: audit: type=1130 audit(1776901676.067:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:56.111146 kernel: audit: type=1130 audit(1776901676.090:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:56.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:56.106940 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 22 23:47:56.138355 kernel: audit: type=1334 audit(1776901676.126:23): prog-id=9 op=LOAD Apr 22 23:47:56.126000 audit: BPF prog-id=9 op=LOAD Apr 22 23:47:56.149941 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 22 23:47:56.386684 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 22 23:47:56.419177 systemd-networkd[744]: lo: Link UP Apr 22 23:47:56.419181 systemd-networkd[744]: lo: Gained carrier Apr 22 23:47:56.476026 kernel: audit: type=1130 audit(1776901676.448:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:56.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:56.427922 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 22 23:47:56.446351 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 22 23:47:56.482913 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 22 23:47:56.495085 systemd[1]: Reached target network.target - Network. Apr 22 23:47:56.510844 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 22 23:47:56.567005 kernel: cryptd: max_cpu_qlen set to 1000 Apr 22 23:47:56.589394 disk-uuid[769]: Primary Header is updated. Apr 22 23:47:56.589394 disk-uuid[769]: Secondary Entries is updated. Apr 22 23:47:56.589394 disk-uuid[769]: Secondary Header is updated. Apr 22 23:47:56.594672 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 22 23:47:56.623816 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 22 23:47:56.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:56.623998 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 22 23:47:56.635400 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 22 23:47:56.668436 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 22 23:47:56.945451 kernel: AES CTR mode by8 optimization enabled Apr 22 23:47:56.958898 systemd-networkd[744]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 22 23:47:57.751909 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 22 23:47:56.958906 systemd-networkd[744]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 22 23:47:56.959906 systemd-networkd[744]: eth0: Link UP Apr 22 23:47:56.972549 systemd-networkd[744]: eth0: Gained carrier Apr 22 23:47:56.982967 systemd-networkd[744]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 22 23:47:57.017150 systemd-networkd[744]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 22 23:47:57.264294 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 22 23:47:57.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:57.804244 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 22 23:47:57.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:57.825435 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 22 23:47:57.848256 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 22 23:47:57.856028 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 22 23:47:57.869501 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 22 23:47:57.932515 disk-uuid[770]: Warning: The kernel is still using the old partition table. Apr 22 23:47:57.932515 disk-uuid[770]: The new table will be used at the next reboot or after you Apr 22 23:47:57.932515 disk-uuid[770]: run partprobe(8) or kpartx(8) Apr 22 23:47:57.932515 disk-uuid[770]: The operation has completed successfully. Apr 22 23:47:57.986460 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 22 23:47:57.986787 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 22 23:47:58.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:58.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:58.011453 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 22 23:47:58.018496 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 22 23:47:58.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:47:58.167262 systemd-networkd[744]: eth0: Gained IPv6LL Apr 22 23:47:58.180443 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (890) Apr 22 23:47:58.199488 kernel: BTRFS info (device vda6): first mount of filesystem 2d4e8828-6ba6-458d-87f8-40fb1ce4470a Apr 22 23:47:58.203360 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 22 23:47:58.289993 kernel: BTRFS info (device vda6): turning on async discard Apr 22 23:47:58.290320 kernel: BTRFS info (device vda6): enabling free space tree Apr 22 23:47:58.339837 kernel: BTRFS info (device vda6): last unmount of filesystem 2d4e8828-6ba6-458d-87f8-40fb1ce4470a Apr 22 23:47:58.353441 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 22 23:47:58.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:47:58.354935 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 22 23:48:01.171828 ignition[909]: Ignition 2.24.0 Apr 22 23:48:01.173923 ignition[909]: Stage: fetch-offline Apr 22 23:48:01.189701 ignition[909]: no configs at "/usr/lib/ignition/base.d" Apr 22 23:48:01.198830 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 23:48:01.213027 ignition[909]: parsed url from cmdline: "" Apr 22 23:48:01.214547 ignition[909]: no config URL provided Apr 22 23:48:01.293067 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Apr 22 23:48:01.293326 ignition[909]: no config at "/usr/lib/ignition/user.ign" Apr 22 23:48:01.295042 ignition[909]: op(1): [started] loading QEMU firmware config module Apr 22 23:48:01.295047 ignition[909]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 22 23:48:01.383281 ignition[909]: op(1): [finished] loading QEMU firmware config module Apr 22 23:48:01.768844 ignition[909]: parsing config with SHA512: c72f82de710c4aa4d99b5859e279ed49063595f92fca77236837e1279b61376bf66968450939cba9e215881af86bcdde72d61e9cdb6700521e9c1f6b5c9a8a07 Apr 22 23:48:01.887871 unknown[909]: fetched base config from "system" Apr 22 23:48:01.887927 unknown[909]: fetched user config from "qemu" Apr 22 23:48:01.888459 ignition[909]: fetch-offline: fetch-offline passed Apr 22 23:48:01.906444 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 22 23:48:02.000431 kernel: kauditd_printk_skb: 7 callbacks suppressed Apr 22 23:48:02.000460 kernel: audit: type=1130 audit(1776901681.961:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:01.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:01.890516 ignition[909]: Ignition finished successfully Apr 22 23:48:01.985333 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 22 23:48:02.058155 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 22 23:48:03.364066 ignition[920]: Ignition 2.24.0 Apr 22 23:48:03.364153 ignition[920]: Stage: kargs Apr 22 23:48:03.364990 ignition[920]: no configs at "/usr/lib/ignition/base.d" Apr 22 23:48:03.365002 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 23:48:03.370547 ignition[920]: kargs: kargs passed Apr 22 23:48:03.370807 ignition[920]: Ignition finished successfully Apr 22 23:48:03.408924 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 22 23:48:03.441305 kernel: audit: type=1130 audit(1776901683.415:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:03.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:03.419688 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 22 23:48:03.828732 ignition[928]: Ignition 2.24.0 Apr 22 23:48:03.828830 ignition[928]: Stage: disks Apr 22 23:48:03.829398 ignition[928]: no configs at "/usr/lib/ignition/base.d" Apr 22 23:48:03.829406 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 23:48:03.865138 ignition[928]: disks: disks passed Apr 22 23:48:03.865265 ignition[928]: Ignition finished successfully Apr 22 23:48:03.873854 systemd[1]: Finished ignition-disks.service - Ignition (disks).
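[Annotation, not part of the log] In the fetch-offline stage above, Ignition logs the SHA512 of the raw config bytes it is about to parse ("parsing config with SHA512: c72f82de..."). The equivalent digest in userspace is a plain SHA-512 over the config file; a sketch with Python's hashlib (the sample config bytes here are illustrative, not the config from this boot):

```python
import hashlib

def config_digest(raw: bytes) -> str:
    # Same kind of digest Ignition prints as "parsing config with SHA512: ..."
    return hashlib.sha512(raw).hexdigest()

digest = config_digest(b'{"ignition": {"version": "3.4.0"}}')
print(len(digest))  # 128 hex characters, matching the length logged above
```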
Apr 22 23:48:03.902157 kernel: audit: type=1130 audit(1776901683.882:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:03.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:03.884008 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 22 23:48:03.911505 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 22 23:48:03.945069 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 22 23:48:03.950942 systemd[1]: Reached target sysinit.target - System Initialization. Apr 22 23:48:03.977404 systemd[1]: Reached target basic.target - Basic System. Apr 22 23:48:03.996893 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 22 23:48:04.570968 systemd-fsck[937]: ROOT: clean, 15/456736 files, 38230/456704 blocks Apr 22 23:48:04.598830 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 22 23:48:04.628162 kernel: audit: type=1130 audit(1776901684.600:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:04.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:04.625430 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 22 23:48:05.384121 kernel: EXT4-fs (vda9): mounted filesystem acb26ad1-a3c4-45b5-95a2-dde9b0966d3b r/w with ordered data mode. Quota mode: none. 
Apr 22 23:48:05.385859 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 22 23:48:05.397175 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 22 23:48:05.401890 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 22 23:48:05.414959 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 22 23:48:05.446300 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 22 23:48:05.448196 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 22 23:48:05.448276 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 22 23:48:05.611368 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (947) Apr 22 23:48:05.611405 kernel: BTRFS info (device vda6): first mount of filesystem 2d4e8828-6ba6-458d-87f8-40fb1ce4470a Apr 22 23:48:05.611419 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 22 23:48:05.467425 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 22 23:48:05.490468 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 22 23:48:05.660134 kernel: BTRFS info (device vda6): turning on async discard Apr 22 23:48:05.660977 kernel: BTRFS info (device vda6): enabling free space tree Apr 22 23:48:05.686194 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 22 23:48:06.995550 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 22 23:48:07.032682 kernel: audit: type=1130 audit(1776901687.009:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:48:07.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:07.069694 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 22 23:48:07.089139 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 22 23:48:07.224704 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 22 23:48:07.248345 kernel: BTRFS info (device vda6): last unmount of filesystem 2d4e8828-6ba6-458d-87f8-40fb1ce4470a Apr 22 23:48:07.425696 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 22 23:48:07.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:07.460119 kernel: audit: type=1130 audit(1776901687.434:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:08.008493 ignition[1045]: INFO : Ignition 2.24.0 Apr 22 23:48:08.008493 ignition[1045]: INFO : Stage: mount Apr 22 23:48:08.023475 ignition[1045]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 22 23:48:08.023475 ignition[1045]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 23:48:08.060451 ignition[1045]: INFO : mount: mount passed Apr 22 23:48:08.066095 ignition[1045]: INFO : Ignition finished successfully Apr 22 23:48:08.075166 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 22 23:48:08.113930 kernel: audit: type=1130 audit(1776901688.087:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:48:08.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:08.179742 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 22 23:48:08.300308 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 22 23:48:08.387115 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1057) Apr 22 23:48:08.387465 kernel: BTRFS info (device vda6): first mount of filesystem 2d4e8828-6ba6-458d-87f8-40fb1ce4470a Apr 22 23:48:08.403448 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 22 23:48:08.508977 kernel: BTRFS info (device vda6): turning on async discard Apr 22 23:48:08.511101 kernel: BTRFS info (device vda6): enabling free space tree Apr 22 23:48:08.524478 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 22 23:48:08.881036 ignition[1073]: INFO : Ignition 2.24.0 Apr 22 23:48:08.881036 ignition[1073]: INFO : Stage: files Apr 22 23:48:08.895417 ignition[1073]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 22 23:48:08.895417 ignition[1073]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 23:48:08.895417 ignition[1073]: DEBUG : files: compiled without relabeling support, skipping Apr 22 23:48:08.918308 ignition[1073]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 22 23:48:08.918308 ignition[1073]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 22 23:48:08.946082 ignition[1073]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 22 23:48:08.961860 ignition[1073]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 22 23:48:08.971090 ignition[1073]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 22 23:48:08.968330 
unknown[1073]: wrote ssh authorized keys file for user: core Apr 22 23:48:09.016322 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 22 23:48:09.092468 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 22 23:48:09.311045 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 22 23:48:10.666124 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 22 23:48:10.666124 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 22 23:48:10.701277 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 22 23:48:10.701277 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 22 23:48:10.701277 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 22 23:48:10.758515 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 22 23:48:10.758515 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 22 23:48:10.758515 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 22 23:48:10.758515 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 22 23:48:10.758515 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(8): 
[started] writing file "/sysroot/etc/flatcar/update.conf" Apr 22 23:48:10.758515 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 22 23:48:10.758515 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 22 23:48:10.758515 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 22 23:48:10.758515 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 22 23:48:10.758515 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Apr 22 23:48:11.610312 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 22 23:48:17.817900 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 22 23:48:17.817900 ignition[1073]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 22 23:48:17.860492 ignition[1073]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 22 23:48:17.874393 ignition[1073]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 22 23:48:17.874393 ignition[1073]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 22 23:48:17.874393 ignition[1073]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" 
Apr 22 23:48:17.874393 ignition[1073]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 22 23:48:17.874393 ignition[1073]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 22 23:48:17.874393 ignition[1073]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 22 23:48:17.874393 ignition[1073]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 22 23:48:18.203215 ignition[1073]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 22 23:48:18.351482 ignition[1073]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 22 23:48:18.367733 ignition[1073]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Apr 22 23:48:18.367733 ignition[1073]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 22 23:48:18.367733 ignition[1073]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 22 23:48:18.367733 ignition[1073]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 22 23:48:18.367733 ignition[1073]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 22 23:48:18.367733 ignition[1073]: INFO : files: files passed Apr 22 23:48:18.367733 ignition[1073]: INFO : Ignition finished successfully Apr 22 23:48:18.480418 kernel: audit: type=1130 audit(1776901698.429:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:48:18.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:18.402044 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 22 23:48:18.452519 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 22 23:48:18.491115 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 22 23:48:18.543501 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 22 23:48:18.546277 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 22 23:48:18.605999 kernel: audit: type=1130 audit(1776901698.562:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:18.606099 kernel: audit: type=1131 audit(1776901698.562:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:18.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:18.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:48:18.676767 initrd-setup-root-after-ignition[1105]: grep: /sysroot/oem/oem-release: No such file or directory Apr 22 23:48:18.704945 initrd-setup-root-after-ignition[1111]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 22 23:48:18.716428 initrd-setup-root-after-ignition[1107]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 22 23:48:18.716428 initrd-setup-root-after-ignition[1107]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 22 23:48:18.750931 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 22 23:48:18.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:18.780906 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 22 23:48:18.807700 kernel: audit: type=1130 audit(1776901698.771:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:18.809769 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 22 23:48:19.113282 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 22 23:48:19.206131 kernel: audit: type=1130 audit(1776901699.128:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.206158 kernel: audit: type=1131 audit(1776901699.128:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:48:19.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.114016 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 22 23:48:19.185205 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 22 23:48:19.214996 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 22 23:48:19.240389 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 22 23:48:19.265421 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 22 23:48:19.363420 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 22 23:48:19.398459 kernel: audit: type=1130 audit(1776901699.371:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.384114 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 22 23:48:19.483219 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Apr 22 23:48:19.485020 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 22 23:48:19.485490 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 22 23:48:19.509536 systemd[1]: Stopped target timers.target - Timer Units. Apr 22 23:48:19.723753 kernel: audit: type=1131 audit(1776901699.602:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.599411 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 22 23:48:19.746789 kernel: audit: type=1131 audit(1776901699.723:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.602225 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 22 23:48:19.626163 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 22 23:48:19.634369 systemd[1]: Stopped target basic.target - Basic System. Apr 22 23:48:19.642758 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 22 23:48:19.644800 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 22 23:48:19.829435 kernel: audit: type=1131 audit(1776901699.805:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Apr 22 23:48:19.646297 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 22 23:48:19.659482 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Apr 22 23:48:19.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.664706 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 22 23:48:19.676075 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 22 23:48:19.688695 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 22 23:48:19.695389 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 22 23:48:19.707519 systemd[1]: Stopped target swap.target - Swaps. Apr 22 23:48:19.710378 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 22 23:48:19.715052 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 22 23:48:19.747437 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 22 23:48:19.760382 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 22 23:48:20.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.774083 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 22 23:48:20.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.775463 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Apr 22 23:48:19.790128 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 22 23:48:19.791295 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 22 23:48:19.830545 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 22 23:48:19.831022 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 22 23:48:19.855133 systemd[1]: Stopped target paths.target - Path Units. Apr 22 23:48:20.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.867351 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 22 23:48:19.872773 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 22 23:48:20.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.888343 systemd[1]: Stopped target slices.target - Slice Units. Apr 22 23:48:19.904058 systemd[1]: Stopped target sockets.target - Socket Units. Apr 22 23:48:19.917231 systemd[1]: iscsid.socket: Deactivated successfully. Apr 22 23:48:19.917890 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 22 23:48:19.918086 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 22 23:48:19.918275 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 22 23:48:20.000177 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. 
Apr 22 23:48:20.003370 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Apr 22 23:48:20.007914 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 22 23:48:20.008207 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 22 23:48:20.036158 systemd[1]: ignition-files.service: Deactivated successfully. Apr 22 23:48:20.041220 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 22 23:48:20.063973 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 22 23:48:20.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.376706 ignition[1131]: INFO : Ignition 2.24.0 Apr 22 23:48:20.376706 ignition[1131]: INFO : Stage: umount Apr 22 23:48:20.376706 ignition[1131]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 22 23:48:20.376706 ignition[1131]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 23:48:20.095064 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 22 23:48:20.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.427214 ignition[1131]: INFO : umount: umount passed Apr 22 23:48:20.427214 ignition[1131]: INFO : Ignition finished successfully Apr 22 23:48:20.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:48:20.108497 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 22 23:48:20.108975 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 22 23:48:20.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.125226 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 22 23:48:20.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.125333 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 22 23:48:20.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.155276 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 22 23:48:20.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.155746 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 22 23:48:20.301707 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 22 23:48:20.301908 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 22 23:48:20.396778 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Apr 22 23:48:20.404951 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 22 23:48:20.405140 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 22 23:48:20.422424 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 22 23:48:20.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.423799 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 22 23:48:20.448718 systemd[1]: Stopped target network.target - Network. Apr 22 23:48:20.465986 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 22 23:48:20.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.468423 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 22 23:48:20.709000 audit: BPF prog-id=9 op=UNLOAD Apr 22 23:48:20.711000 audit: BPF prog-id=6 op=UNLOAD Apr 22 23:48:20.477161 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 22 23:48:20.483336 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 22 23:48:20.497728 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 22 23:48:20.497917 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 22 23:48:20.519463 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 22 23:48:20.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.521363 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Apr 22 23:48:20.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.532444 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 22 23:48:20.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.532539 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 22 23:48:20.561339 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 22 23:48:20.574440 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 22 23:48:20.653544 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 22 23:48:20.655922 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 22 23:48:20.683466 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 22 23:48:20.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.684497 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 22 23:48:20.712335 systemd[1]: Stopped target network-pre.target - Preparation for Network. Apr 22 23:48:20.724253 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 22 23:48:20.724335 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 22 23:48:20.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.745677 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Apr 22 23:48:20.752538 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 22 23:48:21.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:20.752747 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 22 23:48:20.768962 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 22 23:48:21.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:20.769044 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 22 23:48:20.784534 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 22 23:48:20.784790 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 22 23:48:20.799113 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 22 23:48:20.859135 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 22 23:48:20.869527 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 22 23:48:21.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:20.902993 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 22 23:48:20.904478 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 22 23:48:21.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:20.953001 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 22 23:48:21.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:20.954184 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 22 23:48:21.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:21.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:20.968133 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 22 23:48:20.968219 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 22 23:48:20.995310 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 22 23:48:20.997524 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 22 23:48:21.014390 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 22 23:48:21.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:21.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:21.014703 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 22 23:48:21.060396 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 22 23:48:21.078242 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 22 23:48:21.078380 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 22 23:48:21.111509 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 22 23:48:21.113760 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 22 23:48:21.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:21.128450 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 22 23:48:21.128704 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 22 23:48:21.147329 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 22 23:48:21.147424 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 22 23:48:21.164922 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 22 23:48:21.165041 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 22 23:48:21.198139 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 22 23:48:21.198310 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 22 23:48:21.310530 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 22 23:48:21.311967 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 22 23:48:21.323974 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 22 23:48:21.348438 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 22 23:48:21.428404 systemd[1]: Switching root.
Apr 22 23:48:21.494542 systemd-journald[317]: Journal stopped
Apr 22 23:48:27.493064 systemd-journald[317]: Received SIGTERM from PID 1 (systemd).
Apr 22 23:48:27.493123 kernel: SELinux: policy capability network_peer_controls=1
Apr 22 23:48:27.493142 kernel: SELinux: policy capability open_perms=1
Apr 22 23:48:27.493151 kernel: SELinux: policy capability extended_socket_class=1
Apr 22 23:48:27.493163 kernel: SELinux: policy capability always_check_network=0
Apr 22 23:48:27.493172 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 22 23:48:27.493183 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 22 23:48:27.493191 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 22 23:48:27.493199 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 22 23:48:27.493210 kernel: SELinux: policy capability userspace_initial_context=0
Apr 22 23:48:27.493219 systemd[1]: Successfully loaded SELinux policy in 223.297ms.
Apr 22 23:48:27.493236 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 46.087ms.
Apr 22 23:48:27.493246 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 22 23:48:27.493255 systemd[1]: Detected virtualization kvm.
Apr 22 23:48:27.493263 systemd[1]: Detected architecture x86-64.
Apr 22 23:48:27.493271 systemd[1]: Detected first boot.
Apr 22 23:48:27.493281 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Apr 22 23:48:27.493293 zram_generator::config[1175]: No configuration found.
Apr 22 23:48:27.493305 kernel: Guest personality initialized and is inactive
Apr 22 23:48:27.493313 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 22 23:48:27.493321 kernel: Initialized host personality
Apr 22 23:48:27.493330 kernel: NET: Registered PF_VSOCK protocol family
Apr 22 23:48:27.493339 systemd[1]: Populated /etc with preset unit settings.
Apr 22 23:48:27.493349 kernel: kauditd_printk_skb: 40 callbacks suppressed
Apr 22 23:48:27.493357 kernel: audit: type=1334 audit(1776901705.287:89): prog-id=12 op=LOAD
Apr 22 23:48:27.493366 kernel: audit: type=1334 audit(1776901705.289:90): prog-id=3 op=UNLOAD
Apr 22 23:48:27.493375 kernel: audit: type=1334 audit(1776901705.291:91): prog-id=13 op=LOAD
Apr 22 23:48:27.493383 kernel: audit: type=1334 audit(1776901705.291:92): prog-id=14 op=LOAD
Apr 22 23:48:27.493391 kernel: audit: type=1334 audit(1776901705.291:93): prog-id=4 op=UNLOAD
Apr 22 23:48:27.493399 kernel: audit: type=1334 audit(1776901705.291:94): prog-id=5 op=UNLOAD
Apr 22 23:48:27.493408 kernel: audit: type=1131 audit(1776901705.299:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.493417 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 22 23:48:27.493426 kernel: audit: type=1334 audit(1776901705.354:96): prog-id=12 op=UNLOAD
Apr 22 23:48:27.493434 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 22 23:48:27.493442 kernel: audit: type=1130 audit(1776901705.379:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.493451 kernel: audit: type=1131 audit(1776901705.381:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.493461 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 22 23:48:27.493474 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 22 23:48:27.493482 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 22 23:48:27.493494 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 22 23:48:27.493503 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 22 23:48:27.493511 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 22 23:48:27.493522 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 22 23:48:27.493530 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 22 23:48:27.493539 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 22 23:48:27.493548 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 22 23:48:27.494027 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 22 23:48:27.494253 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 22 23:48:27.494263 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 22 23:48:27.494282 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 22 23:48:27.494291 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 22 23:48:27.494300 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 22 23:48:27.494309 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 22 23:48:27.494318 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 22 23:48:27.494326 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 22 23:48:27.494335 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 22 23:48:27.494345 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 22 23:48:27.494354 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 22 23:48:27.494362 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 22 23:48:27.494371 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 22 23:48:27.494380 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Apr 22 23:48:27.494388 systemd[1]: Reached target slices.target - Slice Units.
Apr 22 23:48:27.494397 systemd[1]: Reached target swap.target - Swaps.
Apr 22 23:48:27.494408 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 22 23:48:27.494417 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 22 23:48:27.494425 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 22 23:48:27.494434 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Apr 22 23:48:27.494443 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Apr 22 23:48:27.494451 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 22 23:48:27.494461 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Apr 22 23:48:27.494471 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Apr 22 23:48:27.494484 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 22 23:48:27.494492 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 22 23:48:27.494500 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 22 23:48:27.494509 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 22 23:48:27.494518 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 22 23:48:27.494526 systemd[1]: Mounting media.mount - External Media Directory...
Apr 22 23:48:27.494534 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 22 23:48:27.494544 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 22 23:48:27.494553 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 22 23:48:27.494698 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 22 23:48:27.494709 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 22 23:48:27.494718 systemd[1]: Reached target machines.target - Containers.
Apr 22 23:48:27.494727 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 22 23:48:27.494739 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 22 23:48:27.494749 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 22 23:48:27.494758 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 22 23:48:27.494766 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 22 23:48:27.494774 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 22 23:48:27.494783 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 22 23:48:27.494792 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 22 23:48:27.494802 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 22 23:48:27.494812 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 22 23:48:27.494821 kernel: ACPI: bus type drm_connector registered
Apr 22 23:48:27.494829 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 22 23:48:27.494838 kernel: fuse: init (API version 7.41)
Apr 22 23:48:27.494846 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 22 23:48:27.494855 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 22 23:48:27.495846 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 22 23:48:27.495979 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 22 23:48:27.495989 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 22 23:48:27.496018 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 22 23:48:27.496046 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 22 23:48:27.497106 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 22 23:48:27.497236 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 22 23:48:27.497350 systemd-journald[1261]: Collecting audit messages is enabled.
Apr 22 23:48:27.497432 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 22 23:48:27.497444 systemd-journald[1261]: Journal started
Apr 22 23:48:27.497518 systemd-journald[1261]: Runtime Journal (/run/log/journal/7635b1b405cf4c30891826f297c66e79) is 6M, max 48.1M, 42.1M free.
Apr 22 23:48:26.110000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Apr 22 23:48:27.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.231000 audit: BPF prog-id=14 op=UNLOAD
Apr 22 23:48:27.231000 audit: BPF prog-id=13 op=UNLOAD
Apr 22 23:48:27.249000 audit: BPF prog-id=15 op=LOAD
Apr 22 23:48:27.251000 audit: BPF prog-id=16 op=LOAD
Apr 22 23:48:27.252000 audit: BPF prog-id=17 op=LOAD
Apr 22 23:48:27.487000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Apr 22 23:48:27.487000 audit[1261]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe8ca648d0 a2=4000 a3=0 items=0 ppid=1 pid=1261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 22 23:48:27.487000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Apr 22 23:48:25.265232 systemd[1]: Queued start job for default target multi-user.target.
Apr 22 23:48:25.295461 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 22 23:48:25.298189 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 22 23:48:25.302253 systemd[1]: systemd-journald.service: Consumed 3.313s CPU time.
Apr 22 23:48:27.527010 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 22 23:48:27.547149 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 22 23:48:27.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.553839 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 22 23:48:27.564090 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 22 23:48:27.577394 systemd[1]: Mounted media.mount - External Media Directory.
Apr 22 23:48:27.587025 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 22 23:48:27.596481 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 22 23:48:27.614066 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 22 23:48:27.627742 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 22 23:48:27.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.643337 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 22 23:48:27.659290 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 22 23:48:27.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.659756 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 22 23:48:27.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.672141 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 22 23:48:27.672482 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 22 23:48:27.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.687243 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 22 23:48:27.687540 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 22 23:48:27.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.700523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 22 23:48:27.701531 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 22 23:48:27.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.773520 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 22 23:48:27.777527 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 22 23:48:27.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.794264 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 22 23:48:27.797461 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 22 23:48:27.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.815702 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 22 23:48:27.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.832224 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 22 23:48:27.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.853239 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 22 23:48:27.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.883989 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 22 23:48:27.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.903180 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 22 23:48:27.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:27.956742 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 22 23:48:27.976458 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Apr 22 23:48:28.006398 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 22 23:48:28.022385 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 22 23:48:28.033128 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 22 23:48:28.037363 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 22 23:48:28.061072 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 22 23:48:28.076273 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 22 23:48:28.076468 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Apr 22 23:48:28.086960 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 22 23:48:28.107437 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 22 23:48:28.153377 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 22 23:48:28.184743 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 22 23:48:28.195480 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 22 23:48:28.201355 systemd-journald[1261]: Time spent on flushing to /var/log/journal/7635b1b405cf4c30891826f297c66e79 is 32.259ms for 1136 entries.
Apr 22 23:48:28.201355 systemd-journald[1261]: System Journal (/var/log/journal/7635b1b405cf4c30891826f297c66e79) is 8M, max 163.5M, 155.5M free.
Apr 22 23:48:28.256437 systemd-journald[1261]: Received client request to flush runtime journal.
Apr 22 23:48:28.204239 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 22 23:48:28.234403 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 22 23:48:28.248006 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 22 23:48:28.260176 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 22 23:48:28.270216 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 22 23:48:28.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:28.287397 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 22 23:48:28.308062 kernel: loop1: detected capacity change from 0 to 217752
Apr 22 23:48:28.309501 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 22 23:48:28.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:28.326123 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 22 23:48:28.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:28.346290 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 22 23:48:28.361135 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 22 23:48:28.381262 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Apr 22 23:48:28.381280 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Apr 22 23:48:28.396037 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 22 23:48:28.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:28.407838 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 22 23:48:28.471334 kernel: loop2: detected capacity change from 0 to 111560
Apr 22 23:48:28.523034 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 22 23:48:28.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:28.545465 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 22 23:48:28.567708 kernel: loop3: detected capacity change from 0 to 50784
Apr 22 23:48:28.599021 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 22 23:48:28.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:48:28.611000 audit: BPF prog-id=18 op=LOAD
Apr 22 23:48:28.613000 audit: BPF prog-id=19 op=LOAD
Apr 22 23:48:28.614000 audit: BPF prog-id=20 op=LOAD
Apr 22 23:48:28.616424 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Apr 22 23:48:28.625000 audit: BPF prog-id=21 op=LOAD
Apr 22 23:48:28.635301 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 22 23:48:28.653694 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 22 23:48:28.657229 kernel: loop4: detected capacity change from 0 to 217752 Apr 22 23:48:28.670000 audit: BPF prog-id=22 op=LOAD Apr 22 23:48:28.681000 audit: BPF prog-id=23 op=LOAD Apr 22 23:48:28.681000 audit: BPF prog-id=24 op=LOAD Apr 22 23:48:28.684120 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Apr 22 23:48:28.696000 audit: BPF prog-id=25 op=LOAD Apr 22 23:48:28.697000 audit: BPF prog-id=26 op=LOAD Apr 22 23:48:28.697000 audit: BPF prog-id=27 op=LOAD Apr 22 23:48:28.703158 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 22 23:48:28.801763 kernel: loop5: detected capacity change from 0 to 111560 Apr 22 23:48:28.811080 systemd-tmpfiles[1320]: ACLs are not supported, ignoring. Apr 22 23:48:28.811098 systemd-tmpfiles[1320]: ACLs are not supported, ignoring. Apr 22 23:48:28.823497 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 22 23:48:28.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.863743 kernel: loop6: detected capacity change from 0 to 50784 Apr 22 23:48:28.894267 systemd-nsresourced[1322]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Apr 22 23:48:28.895336 (sd-merge)[1321]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Apr 22 23:48:28.899232 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Apr 22 23:48:28.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:48:28.919118 (sd-merge)[1321]: Merged extensions into '/usr'. Apr 22 23:48:28.944407 systemd[1]: Reload requested from client PID 1296 ('systemd-sysext') (unit systemd-sysext.service)... Apr 22 23:48:28.945243 systemd[1]: Reloading... Apr 22 23:48:29.129849 zram_generator::config[1363]: No configuration found. Apr 22 23:48:29.157145 systemd-oomd[1318]: No swap; memory pressure usage will be degraded Apr 22 23:48:29.162508 systemd-resolved[1319]: Positive Trust Anchors: Apr 22 23:48:29.162518 systemd-resolved[1319]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 22 23:48:29.162521 systemd-resolved[1319]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Apr 22 23:48:29.162546 systemd-resolved[1319]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 22 23:48:29.169401 systemd-resolved[1319]: Defaulting to hostname 'linux'. Apr 22 23:48:30.067142 systemd[1]: Reloading finished in 1121 ms. Apr 22 23:48:30.187852 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 22 23:48:30.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.255457 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. 
Apr 22 23:48:30.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.284822 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 22 23:48:30.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.309423 kernel: kauditd_printk_skb: 51 callbacks suppressed Apr 22 23:48:30.309957 kernel: audit: type=1130 audit(1776901710.302:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.310200 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 22 23:48:30.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.348305 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 22 23:48:30.368077 kernel: audit: type=1130 audit(1776901710.344:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:48:30.402423 kernel: audit: type=1130 audit(1776901710.381:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.419433 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 22 23:48:30.454057 systemd[1]: Starting ensure-sysext.service... Apr 22 23:48:30.464736 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 22 23:48:30.473000 audit: BPF prog-id=8 op=UNLOAD Apr 22 23:48:30.473000 audit: BPF prog-id=7 op=UNLOAD Apr 22 23:48:30.487844 kernel: audit: type=1334 audit(1776901710.473:151): prog-id=8 op=UNLOAD Apr 22 23:48:30.488122 kernel: audit: type=1334 audit(1776901710.473:152): prog-id=7 op=UNLOAD Apr 22 23:48:30.489000 audit: BPF prog-id=28 op=LOAD Apr 22 23:48:30.491768 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Apr 22 23:48:30.489000 audit: BPF prog-id=29 op=LOAD Apr 22 23:48:30.497025 kernel: audit: type=1334 audit(1776901710.489:153): prog-id=28 op=LOAD Apr 22 23:48:30.497047 kernel: audit: type=1334 audit(1776901710.489:154): prog-id=29 op=LOAD Apr 22 23:48:30.578507 kernel: audit: type=1334 audit(1776901710.505:155): prog-id=30 op=LOAD Apr 22 23:48:30.580215 kernel: audit: type=1334 audit(1776901710.505:156): prog-id=21 op=UNLOAD Apr 22 23:48:30.505000 audit: BPF prog-id=30 op=LOAD Apr 22 23:48:30.505000 audit: BPF prog-id=21 op=UNLOAD Apr 22 23:48:30.509000 audit: BPF prog-id=31 op=LOAD Apr 22 23:48:30.511000 audit: BPF prog-id=15 op=UNLOAD Apr 22 23:48:30.514000 audit: BPF prog-id=32 op=LOAD Apr 22 23:48:30.514000 audit: BPF prog-id=33 op=LOAD Apr 22 23:48:30.514000 audit: BPF prog-id=16 op=UNLOAD Apr 22 23:48:30.514000 audit: BPF prog-id=17 op=UNLOAD Apr 22 23:48:30.580000 audit: BPF prog-id=34 op=LOAD Apr 22 23:48:30.580000 audit: BPF prog-id=18 op=UNLOAD Apr 22 23:48:30.581000 audit: BPF prog-id=35 op=LOAD Apr 22 23:48:30.581000 audit: BPF prog-id=36 op=LOAD Apr 22 23:48:30.581000 audit: BPF prog-id=19 op=UNLOAD Apr 22 23:48:30.581000 audit: BPF prog-id=20 op=UNLOAD Apr 22 23:48:30.585000 audit: BPF prog-id=37 op=LOAD Apr 22 23:48:30.586000 audit: BPF prog-id=22 op=UNLOAD Apr 22 23:48:30.589000 audit: BPF prog-id=38 op=LOAD Apr 22 23:48:30.589000 audit: BPF prog-id=39 op=LOAD Apr 22 23:48:30.589000 audit: BPF prog-id=23 op=UNLOAD Apr 22 23:48:30.589000 audit: BPF prog-id=24 op=UNLOAD Apr 22 23:48:30.590737 kernel: audit: type=1334 audit(1776901710.509:157): prog-id=31 op=LOAD Apr 22 23:48:30.591000 audit: BPF prog-id=40 op=LOAD Apr 22 23:48:30.591000 audit: BPF prog-id=25 op=UNLOAD Apr 22 23:48:30.591000 audit: BPF prog-id=41 op=LOAD Apr 22 23:48:30.591000 audit: BPF prog-id=42 op=LOAD Apr 22 23:48:30.592000 audit: BPF prog-id=26 op=UNLOAD Apr 22 23:48:30.592000 audit: BPF prog-id=27 op=UNLOAD Apr 22 23:48:30.613387 systemd[1]: Reload requested from client PID 
1404 ('systemctl') (unit ensure-sysext.service)... Apr 22 23:48:30.613730 systemd[1]: Reloading... Apr 22 23:48:30.629014 systemd-tmpfiles[1405]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 22 23:48:30.629112 systemd-tmpfiles[1405]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 22 23:48:30.629355 systemd-tmpfiles[1405]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 22 23:48:30.630722 systemd-tmpfiles[1405]: ACLs are not supported, ignoring. Apr 22 23:48:30.631427 systemd-tmpfiles[1405]: ACLs are not supported, ignoring. Apr 22 23:48:30.646971 systemd-tmpfiles[1405]: Detected autofs mount point /boot during canonicalization of boot. Apr 22 23:48:30.646982 systemd-tmpfiles[1405]: Skipping /boot Apr 22 23:48:30.648404 systemd-udevd[1406]: Using default interface naming scheme 'v257'. Apr 22 23:48:30.662497 systemd-tmpfiles[1405]: Detected autofs mount point /boot during canonicalization of boot. Apr 22 23:48:30.662784 systemd-tmpfiles[1405]: Skipping /boot Apr 22 23:48:30.883843 zram_generator::config[1456]: No configuration found. Apr 22 23:48:31.109697 kernel: mousedev: PS/2 mouse device common for all mice Apr 22 23:48:31.137149 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 22 23:48:31.160286 kernel: ACPI: button: Power Button [PWRF] Apr 22 23:48:31.211369 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 22 23:48:31.260050 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 22 23:48:32.993379 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 22 23:48:32.995117 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 22 23:48:33.007337 systemd[1]: Reloading finished in 2393 ms. 
Apr 22 23:48:33.065283 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 22 23:48:33.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:33.096000 audit: BPF prog-id=43 op=LOAD Apr 22 23:48:33.096000 audit: BPF prog-id=34 op=UNLOAD Apr 22 23:48:33.096000 audit: BPF prog-id=44 op=LOAD Apr 22 23:48:33.096000 audit: BPF prog-id=45 op=LOAD Apr 22 23:48:33.096000 audit: BPF prog-id=35 op=UNLOAD Apr 22 23:48:33.096000 audit: BPF prog-id=36 op=UNLOAD Apr 22 23:48:33.097000 audit: BPF prog-id=46 op=LOAD Apr 22 23:48:33.097000 audit: BPF prog-id=37 op=UNLOAD Apr 22 23:48:33.097000 audit: BPF prog-id=47 op=LOAD Apr 22 23:48:33.097000 audit: BPF prog-id=48 op=LOAD Apr 22 23:48:33.097000 audit: BPF prog-id=38 op=UNLOAD Apr 22 23:48:33.097000 audit: BPF prog-id=39 op=UNLOAD Apr 22 23:48:33.098000 audit: BPF prog-id=49 op=LOAD Apr 22 23:48:33.098000 audit: BPF prog-id=40 op=UNLOAD Apr 22 23:48:33.098000 audit: BPF prog-id=50 op=LOAD Apr 22 23:48:33.099000 audit: BPF prog-id=51 op=LOAD Apr 22 23:48:33.100000 audit: BPF prog-id=41 op=UNLOAD Apr 22 23:48:33.100000 audit: BPF prog-id=42 op=UNLOAD Apr 22 23:48:33.101000 audit: BPF prog-id=52 op=LOAD Apr 22 23:48:33.104000 audit: BPF prog-id=53 op=LOAD Apr 22 23:48:33.104000 audit: BPF prog-id=28 op=UNLOAD Apr 22 23:48:33.104000 audit: BPF prog-id=29 op=UNLOAD Apr 22 23:48:33.112000 audit: BPF prog-id=54 op=LOAD Apr 22 23:48:33.112000 audit: BPF prog-id=31 op=UNLOAD Apr 22 23:48:33.112000 audit: BPF prog-id=55 op=LOAD Apr 22 23:48:33.112000 audit: BPF prog-id=56 op=LOAD Apr 22 23:48:33.112000 audit: BPF prog-id=32 op=UNLOAD Apr 22 23:48:33.112000 audit: BPF prog-id=33 op=UNLOAD Apr 22 23:48:33.114000 audit: BPF prog-id=57 op=LOAD Apr 22 23:48:33.116000 audit: BPF prog-id=30 op=UNLOAD Apr 22 23:48:33.221439 
systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 22 23:48:33.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:33.513180 systemd[1]: Finished ensure-sysext.service. Apr 22 23:48:33.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:33.647194 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 22 23:48:33.654981 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 22 23:48:33.704082 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 22 23:48:33.717552 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 22 23:48:33.720458 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 22 23:48:33.786412 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 22 23:48:33.806005 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 22 23:48:33.823078 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 22 23:48:33.832727 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 22 23:48:33.832841 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Apr 22 23:48:33.854151 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Apr 22 23:48:33.867296 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 22 23:48:33.881484 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 22 23:48:33.902264 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 22 23:48:33.920000 audit: BPF prog-id=58 op=LOAD Apr 22 23:48:33.923676 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 22 23:48:33.928000 audit: BPF prog-id=59 op=LOAD Apr 22 23:48:33.933996 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 22 23:48:33.966803 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 22 23:48:33.992421 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 22 23:48:34.002985 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 22 23:48:34.013454 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 22 23:48:34.027232 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 22 23:48:34.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:34.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:34.043868 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Apr 22 23:48:34.044261 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 22 23:48:34.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:34.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:34.056000 audit[1548]: SYSTEM_BOOT pid=1548 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 22 23:48:34.061355 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 22 23:48:34.064000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 22 23:48:34.064000 audit[1553]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffaf6ada60 a2=420 a3=0 items=0 ppid=1519 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 22 23:48:34.064000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 22 23:48:34.064411 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 22 23:48:34.070174 augenrules[1553]: No rules Apr 22 23:48:34.085536 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 22 23:48:34.087279 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 22 23:48:34.101323 systemd[1]: audit-rules.service: Deactivated successfully. 
Apr 22 23:48:34.102242 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 22 23:48:34.103277 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 22 23:48:34.159330 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 22 23:48:34.188352 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 22 23:48:34.188863 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 22 23:48:34.215493 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 22 23:48:34.229211 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 22 23:48:34.237432 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 22 23:48:34.472849 systemd-networkd[1544]: lo: Link UP Apr 22 23:48:34.472857 systemd-networkd[1544]: lo: Gained carrier Apr 22 23:48:34.474538 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 22 23:48:34.476077 systemd-networkd[1544]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 22 23:48:34.476146 systemd-networkd[1544]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 22 23:48:34.483707 systemd-networkd[1544]: eth0: Link UP Apr 22 23:48:34.486114 systemd-networkd[1544]: eth0: Gained carrier Apr 22 23:48:34.486140 systemd-networkd[1544]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 22 23:48:34.536335 systemd-networkd[1544]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 22 23:48:34.553723 systemd-timesyncd[1546]: Network configuration changed, trying to establish connection. Apr 22 23:48:34.557873 systemd-timesyncd[1546]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 22 23:48:34.559229 systemd-timesyncd[1546]: Initial clock synchronization to Wed 2026-04-22 23:48:34.349829 UTC. Apr 22 23:48:35.351194 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 22 23:48:35.407385 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 22 23:48:35.451982 systemd[1]: Reached target network.target - Network. Apr 22 23:48:35.465252 systemd[1]: Reached target time-set.target - System Time Set. Apr 22 23:48:35.481815 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 22 23:48:35.501337 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 22 23:48:35.664505 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 22 23:48:35.895775 ldconfig[1531]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 22 23:48:35.951448 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 22 23:48:36.001363 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 22 23:48:36.130820 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Apr 22 23:48:36.165914 systemd[1]: Reached target sysinit.target - System Initialization. Apr 22 23:48:36.251954 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 22 23:48:36.282535 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 22 23:48:36.298913 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 22 23:48:36.315396 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 22 23:48:36.327283 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 22 23:48:36.342510 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Apr 22 23:48:36.358184 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Apr 22 23:48:36.370863 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 22 23:48:36.385152 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 22 23:48:36.389067 systemd[1]: Reached target paths.target - Path Units. Apr 22 23:48:36.404041 systemd[1]: Reached target timers.target - Timer Units. Apr 22 23:48:36.449098 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 22 23:48:36.469466 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 22 23:48:36.528474 systemd-networkd[1544]: eth0: Gained IPv6LL Apr 22 23:48:36.549173 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 22 23:48:36.571525 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 22 23:48:36.586304 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Apr 22 23:48:36.628208 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 22 23:48:36.648128 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 22 23:48:36.670469 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 22 23:48:36.682223 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 22 23:48:36.699839 systemd[1]: Reached target network-online.target - Network is Online. Apr 22 23:48:36.711316 systemd[1]: Reached target sockets.target - Socket Units. Apr 22 23:48:36.720845 systemd[1]: Reached target basic.target - Basic System. Apr 22 23:48:36.731527 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 22 23:48:36.740904 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 22 23:48:36.750191 systemd[1]: Starting containerd.service - containerd container runtime... Apr 22 23:48:36.799933 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 22 23:48:36.816410 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 22 23:48:36.857252 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 22 23:48:36.925503 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 22 23:48:36.950349 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 22 23:48:36.961131 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 22 23:48:36.971992 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 22 23:48:36.985500 jq[1589]: false Apr 22 23:48:37.001428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 22 23:48:37.021101 extend-filesystems[1590]: Found /dev/vda6 Apr 22 23:48:37.034876 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 22 23:48:37.052300 extend-filesystems[1590]: Found /dev/vda9 Apr 22 23:48:37.064000 extend-filesystems[1590]: Checking size of /dev/vda9 Apr 22 23:48:37.062173 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 22 23:48:37.075855 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 22 23:48:37.088780 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Refreshing passwd entry cache Apr 22 23:48:37.088038 oslogin_cache_refresh[1591]: Refreshing passwd entry cache Apr 22 23:48:37.113405 extend-filesystems[1590]: Resized partition /dev/vda9 Apr 22 23:48:37.112284 oslogin_cache_refresh[1591]: Failure getting users, quitting Apr 22 23:48:37.140111 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Failure getting users, quitting Apr 22 23:48:37.140111 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 22 23:48:37.140111 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Refreshing group entry cache Apr 22 23:48:37.115142 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 22 23:48:37.112302 oslogin_cache_refresh[1591]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 22 23:48:37.112348 oslogin_cache_refresh[1591]: Refreshing group entry cache Apr 22 23:48:37.148107 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 22 23:48:37.157907 extend-filesystems[1606]: resize2fs 1.47.3 (8-Jul-2025) Apr 22 23:48:37.208520 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Failure getting groups, quitting Apr 22 23:48:37.208520 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Apr 22 23:48:37.162859 oslogin_cache_refresh[1591]: Failure getting groups, quitting Apr 22 23:48:37.162871 oslogin_cache_refresh[1591]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 22 23:48:37.227776 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 22 23:48:37.246312 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Apr 22 23:48:37.244816 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 22 23:48:37.248121 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 22 23:48:37.259961 systemd[1]: Starting update-engine.service - Update Engine... Apr 22 23:48:37.272735 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 22 23:48:37.346461 jq[1623]: true Apr 22 23:48:37.332464 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 22 23:48:37.348736 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 22 23:48:37.351332 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 22 23:48:37.351938 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Apr 22 23:48:37.352228 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Apr 22 23:48:37.389527 systemd[1]: motdgen.service: Deactivated successfully. Apr 22 23:48:37.391157 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 22 23:48:37.412445 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 22 23:48:37.457490 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Apr 22 23:48:37.448840 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Apr 22 23:48:37.449518 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 22 23:48:37.544718 update_engine[1619]: I20260422 23:48:37.509763 1619 main.cc:92] Flatcar Update Engine starting Apr 22 23:48:37.563323 extend-filesystems[1606]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 22 23:48:37.563323 extend-filesystems[1606]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 22 23:48:37.563323 extend-filesystems[1606]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Apr 22 23:48:37.583778 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 22 23:48:37.620906 extend-filesystems[1590]: Resized filesystem in /dev/vda9 Apr 22 23:48:37.631954 jq[1642]: true Apr 22 23:48:37.584402 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 22 23:48:37.658149 tar[1640]: linux-amd64/LICENSE Apr 22 23:48:37.658149 tar[1640]: linux-amd64/helm Apr 22 23:48:37.677779 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 22 23:48:37.678997 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 22 23:48:37.704011 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 22 23:48:37.808953 bash[1680]: Updated "/home/core/.ssh/authorized_keys" Apr 22 23:48:37.816214 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 22 23:48:37.833424 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 22 23:48:37.846081 systemd-logind[1614]: Watching system buttons on /dev/input/event2 (Power Button) Apr 22 23:48:37.846107 systemd-logind[1614]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 22 23:48:37.846322 systemd-logind[1614]: New seat seat0. Apr 22 23:48:37.900507 systemd[1]: Started systemd-logind.service - User Login Management. 
Apr 22 23:48:37.915266 dbus-daemon[1587]: [system] SELinux support is enabled Apr 22 23:48:37.916537 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 22 23:48:37.928083 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 22 23:48:37.928310 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 22 23:48:37.939857 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 22 23:48:37.940106 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 22 23:48:37.984792 systemd[1]: Started update-engine.service - Update Engine. Apr 22 23:48:37.985856 update_engine[1619]: I20260422 23:48:37.985409 1619 update_check_scheduler.cc:74] Next update check in 11m7s Apr 22 23:48:37.995124 dbus-daemon[1587]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 22 23:48:37.998399 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 22 23:48:38.099761 sshd_keygen[1622]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 22 23:48:38.252413 locksmithd[1689]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 22 23:48:38.336189 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Apr 22 23:48:38.346213 containerd[1644]: time="2026-04-22T23:48:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 22 23:48:38.351923 containerd[1644]: time="2026-04-22T23:48:38.351698177Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Apr 22 23:48:38.378931 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 22 23:48:38.391869 containerd[1644]: time="2026-04-22T23:48:38.391797541Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="77.613µs"
Apr 22 23:48:38.392374 containerd[1644]: time="2026-04-22T23:48:38.392355028Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 22 23:48:38.392913 containerd[1644]: time="2026-04-22T23:48:38.392897719Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 22 23:48:38.393159 containerd[1644]: time="2026-04-22T23:48:38.393148209Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 22 23:48:38.395347 containerd[1644]: time="2026-04-22T23:48:38.394311074Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 22 23:48:38.395946 containerd[1644]: time="2026-04-22T23:48:38.395924375Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 22 23:48:38.396101 containerd[1644]: time="2026-04-22T23:48:38.396084414Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 22 23:48:38.396153 containerd[1644]: time="2026-04-22T23:48:38.396143491Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 22 23:48:38.396407 containerd[1644]: time="2026-04-22T23:48:38.396388000Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 22 23:48:38.396461 containerd[1644]: time="2026-04-22T23:48:38.396451795Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 22 23:48:38.396504 containerd[1644]: time="2026-04-22T23:48:38.396494843Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 22 23:48:38.396542 containerd[1644]: time="2026-04-22T23:48:38.396534590Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Apr 22 23:48:38.397121 containerd[1644]: time="2026-04-22T23:48:38.397101755Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Apr 22 23:48:38.397183 containerd[1644]: time="2026-04-22T23:48:38.397173294Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 22 23:48:38.397291 containerd[1644]: time="2026-04-22T23:48:38.397280807Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 22 23:48:38.397502 containerd[1644]: time="2026-04-22T23:48:38.397487981Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 22 23:48:38.397863 containerd[1644]: time="2026-04-22T23:48:38.397845648Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 22 23:48:38.397917 containerd[1644]: time="2026-04-22T23:48:38.397907160Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 22 23:48:38.397986 containerd[1644]: time="2026-04-22T23:48:38.397975924Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 22 23:48:38.398265 containerd[1644]: time="2026-04-22T23:48:38.398249107Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 22 23:48:38.398399 containerd[1644]: time="2026-04-22T23:48:38.398386481Z" level=info msg="metadata content store policy set" policy=shared
Apr 22 23:48:38.424789 containerd[1644]: time="2026-04-22T23:48:38.423416096Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 22 23:48:38.424789 containerd[1644]: time="2026-04-22T23:48:38.423678731Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Apr 22 23:48:38.424789 containerd[1644]: time="2026-04-22T23:48:38.423786796Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Apr 22 23:48:38.424789 containerd[1644]: time="2026-04-22T23:48:38.423798234Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 22 23:48:38.424789 containerd[1644]: time="2026-04-22T23:48:38.423813136Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 22 23:48:38.424789 containerd[1644]: time="2026-04-22T23:48:38.423850318Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 22 23:48:38.424789 containerd[1644]: time="2026-04-22T23:48:38.423861153Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 22 23:48:38.424789 containerd[1644]: time="2026-04-22T23:48:38.423870169Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 22 23:48:38.424789 containerd[1644]: time="2026-04-22T23:48:38.423881510Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 22 23:48:38.424789 containerd[1644]: time="2026-04-22T23:48:38.423892645Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 22 23:48:38.424789 containerd[1644]: time="2026-04-22T23:48:38.423901974Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 22 23:48:38.424789 containerd[1644]: time="2026-04-22T23:48:38.423913308Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 22 23:48:38.424789 containerd[1644]: time="2026-04-22T23:48:38.423922687Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 22 23:48:38.424789 containerd[1644]: time="2026-04-22T23:48:38.423933805Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 22 23:48:38.430516 containerd[1644]: time="2026-04-22T23:48:38.424091840Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 22 23:48:38.430516 containerd[1644]: time="2026-04-22T23:48:38.424117550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 22 23:48:38.430516 containerd[1644]: time="2026-04-22T23:48:38.424136654Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 22 23:48:38.430516 containerd[1644]: time="2026-04-22T23:48:38.424527677Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 22 23:48:38.430516 containerd[1644]: time="2026-04-22T23:48:38.428772704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 22 23:48:38.430516 containerd[1644]: time="2026-04-22T23:48:38.428817239Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 22 23:48:38.430516 containerd[1644]: time="2026-04-22T23:48:38.428838511Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 22 23:48:38.430516 containerd[1644]: time="2026-04-22T23:48:38.428852802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 22 23:48:38.430516 containerd[1644]: time="2026-04-22T23:48:38.428865663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 22 23:48:38.430516 containerd[1644]: time="2026-04-22T23:48:38.428877992Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 22 23:48:38.430516 containerd[1644]: time="2026-04-22T23:48:38.428889223Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 22 23:48:38.441190 containerd[1644]: time="2026-04-22T23:48:38.440103356Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 22 23:48:38.447286 containerd[1644]: time="2026-04-22T23:48:38.446299110Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 22 23:48:38.448262 containerd[1644]: time="2026-04-22T23:48:38.448244007Z" level=info msg="Start snapshots syncer"
Apr 22 23:48:38.448393 containerd[1644]: time="2026-04-22T23:48:38.448377888Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 22 23:48:38.449184 containerd[1644]: time="2026-04-22T23:48:38.449136334Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 22 23:48:38.450223 containerd[1644]: time="2026-04-22T23:48:38.450203082Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Apr 22 23:48:38.450339 containerd[1644]: time="2026-04-22T23:48:38.450322333Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Apr 22 23:48:38.452018 containerd[1644]: time="2026-04-22T23:48:38.451814179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Apr 22 23:48:38.465323 containerd[1644]: time="2026-04-22T23:48:38.464938516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Apr 22 23:48:38.468924 containerd[1644]: time="2026-04-22T23:48:38.467004692Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Apr 22 23:48:38.468924 containerd[1644]: time="2026-04-22T23:48:38.467036038Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Apr 22 23:48:38.468924 containerd[1644]: time="2026-04-22T23:48:38.467051089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Apr 22 23:48:38.468924 containerd[1644]: time="2026-04-22T23:48:38.467064896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Apr 22 23:48:38.468924 containerd[1644]: time="2026-04-22T23:48:38.467078494Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Apr 22 23:48:38.468924 containerd[1644]: time="2026-04-22T23:48:38.467092239Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Apr 22 23:48:38.468924 containerd[1644]: time="2026-04-22T23:48:38.467107606Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Apr 22 23:48:38.468924 containerd[1644]: time="2026-04-22T23:48:38.467258551Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 22 23:48:38.468924 containerd[1644]: time="2026-04-22T23:48:38.467278633Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 22 23:48:38.468924 containerd[1644]: time="2026-04-22T23:48:38.467287792Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 22 23:48:38.468924 containerd[1644]: time="2026-04-22T23:48:38.467300166Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 22 23:48:38.468924 containerd[1644]: time="2026-04-22T23:48:38.467308629Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Apr 22 23:48:38.468924 containerd[1644]: time="2026-04-22T23:48:38.467320152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Apr 22 23:48:38.468924 containerd[1644]: time="2026-04-22T23:48:38.467331177Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Apr 22 23:48:38.469261 containerd[1644]: time="2026-04-22T23:48:38.467354711Z" level=info msg="runtime interface created"
Apr 22 23:48:38.469261 containerd[1644]: time="2026-04-22T23:48:38.467360820Z" level=info msg="created NRI interface"
Apr 22 23:48:38.469261 containerd[1644]: time="2026-04-22T23:48:38.467369353Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Apr 22 23:48:38.469261 containerd[1644]: time="2026-04-22T23:48:38.467460384Z" level=info msg="Connect containerd service"
Apr 22 23:48:38.469261 containerd[1644]: time="2026-04-22T23:48:38.467489053Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 22 23:48:38.479155 containerd[1644]: time="2026-04-22T23:48:38.479087155Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 22 23:48:38.479883 systemd[1]: issuegen.service: Deactivated successfully.
Apr 22 23:48:38.483081 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 22 23:48:38.515218 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 22 23:48:38.582028 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 22 23:48:38.615906 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 22 23:48:38.689430 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 22 23:48:38.701403 systemd[1]: Reached target getty.target - Login Prompts.
Apr 22 23:48:38.815991 containerd[1644]: time="2026-04-22T23:48:38.811943190Z" level=info msg="Start subscribing containerd event"
Apr 22 23:48:38.815991 containerd[1644]: time="2026-04-22T23:48:38.812394288Z" level=info msg="Start recovering state"
Apr 22 23:48:38.815991 containerd[1644]: time="2026-04-22T23:48:38.812976081Z" level=info msg="Start event monitor"
Apr 22 23:48:38.815991 containerd[1644]: time="2026-04-22T23:48:38.812996099Z" level=info msg="Start cni network conf syncer for default"
Apr 22 23:48:38.815991 containerd[1644]: time="2026-04-22T23:48:38.813005270Z" level=info msg="Start streaming server"
Apr 22 23:48:38.815991 containerd[1644]: time="2026-04-22T23:48:38.813015907Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Apr 22 23:48:38.815991 containerd[1644]: time="2026-04-22T23:48:38.813021596Z" level=info msg="runtime interface starting up..."
Apr 22 23:48:38.815991 containerd[1644]: time="2026-04-22T23:48:38.813025901Z" level=info msg="starting plugins..."
Apr 22 23:48:38.815991 containerd[1644]: time="2026-04-22T23:48:38.813055424Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Apr 22 23:48:38.815991 containerd[1644]: time="2026-04-22T23:48:38.814880805Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 22 23:48:38.815991 containerd[1644]: time="2026-04-22T23:48:38.815446845Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 22 23:48:38.821733 containerd[1644]: time="2026-04-22T23:48:38.820827253Z" level=info msg="containerd successfully booted in 0.476655s"
Apr 22 23:48:38.825886 systemd[1]: Started containerd.service - containerd container runtime.
Apr 22 23:48:39.040786 tar[1640]: linux-amd64/README.md
Apr 22 23:48:39.221175 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 22 23:48:40.482213 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:48:40.515052 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 22 23:48:40.532993 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:48:40.536037 systemd[1]: Startup finished in 8.738s (kernel) + 35.749s (initrd) + 18.774s (userspace) = 1min 3.262s.
Apr 22 23:48:42.382151 kubelet[1734]: E0422 23:48:42.378822 1734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 22 23:48:42.443986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 22 23:48:42.444197 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 22 23:48:42.451465 systemd[1]: kubelet.service: Consumed 2.109s CPU time, 257.5M memory peak.
Apr 22 23:48:44.904101 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 22 23:48:44.905875 systemd[1]: Started sshd@0-10.0.0.19:22-10.0.0.1:42182.service - OpenSSH per-connection server daemon (10.0.0.1:42182).
Apr 22 23:48:46.460956 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 42182 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 22 23:48:46.497400 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 22 23:48:47.426350 systemd-logind[1614]: New session 1 of user core.
Apr 22 23:48:47.458445 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 22 23:48:47.612409 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 22 23:48:48.326057 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 22 23:48:48.340187 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 22 23:48:48.696807 (systemd)[1753]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Apr 22 23:48:48.787356 systemd-logind[1614]: New session 2 of user core.
Apr 22 23:48:51.458547 systemd[1753]: Queued start job for default target default.target.
Apr 22 23:48:51.472162 systemd[1753]: Created slice app.slice - User Application Slice.
Apr 22 23:48:51.475041 systemd[1753]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Apr 22 23:48:51.475075 systemd[1753]: Reached target paths.target - Paths.
Apr 22 23:48:51.476177 systemd[1753]: Reached target timers.target - Timers.
Apr 22 23:48:51.581190 systemd[1753]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 22 23:48:51.594476 systemd[1753]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Apr 22 23:48:51.689519 systemd[1753]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 22 23:48:51.692477 systemd[1753]: Reached target sockets.target - Sockets.
Apr 22 23:48:51.898476 systemd[1753]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Apr 22 23:48:51.909162 systemd[1753]: Reached target basic.target - Basic System.
Apr 22 23:48:51.909366 systemd[1753]: Reached target default.target - Main User Target.
Apr 22 23:48:51.913113 systemd[1753]: Startup finished in 2.703s.
Apr 22 23:48:51.915361 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 22 23:48:52.097223 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 22 23:48:52.361436 systemd[1]: Started sshd@1-10.0.0.19:22-10.0.0.1:42126.service - OpenSSH per-connection server daemon (10.0.0.1:42126).
Apr 22 23:48:52.505500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 22 23:48:52.562319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:48:52.838783 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 42126 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 22 23:48:52.876135 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 22 23:48:53.101303 systemd-logind[1614]: New session 3 of user core.
Apr 22 23:48:53.121537 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 22 23:48:53.455502 sshd[1774]: Connection closed by 10.0.0.1 port 42126
Apr 22 23:48:53.453969 sshd-session[1767]: pam_unix(sshd:session): session closed for user core
Apr 22 23:48:53.591294 systemd[1]: sshd@1-10.0.0.19:22-10.0.0.1:42126.service: Deactivated successfully.
Apr 22 23:48:53.698780 systemd[1]: session-3.scope: Deactivated successfully.
Apr 22 23:48:53.708086 systemd-logind[1614]: Session 3 logged out. Waiting for processes to exit.
Apr 22 23:48:53.728397 systemd-logind[1614]: Removed session 3.
Apr 22 23:48:53.739130 systemd[1]: Started sshd@2-10.0.0.19:22-10.0.0.1:42140.service - OpenSSH per-connection server daemon (10.0.0.1:42140).
Apr 22 23:48:54.292784 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:48:54.390301 sshd[1780]: Accepted publickey for core from 10.0.0.1 port 42140 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 22 23:48:54.394329 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 22 23:48:54.403069 (kubelet)[1788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:48:54.419802 systemd-logind[1614]: New session 4 of user core.
Apr 22 23:48:54.427875 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 22 23:48:54.599242 sshd[1795]: Connection closed by 10.0.0.1 port 42140
Apr 22 23:48:54.604201 sshd-session[1780]: pam_unix(sshd:session): session closed for user core
Apr 22 23:48:54.806194 systemd[1]: sshd@2-10.0.0.19:22-10.0.0.1:42140.service: Deactivated successfully.
Apr 22 23:48:54.870364 systemd[1]: session-4.scope: Deactivated successfully.
Apr 22 23:48:54.897409 systemd-logind[1614]: Session 4 logged out. Waiting for processes to exit.
Apr 22 23:48:54.921716 systemd[1]: Started sshd@3-10.0.0.19:22-10.0.0.1:42154.service - OpenSSH per-connection server daemon (10.0.0.1:42154).
Apr 22 23:48:54.925971 systemd-logind[1614]: Removed session 4.
Apr 22 23:48:55.114743 kubelet[1788]: E0422 23:48:55.113993 1788 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 22 23:48:55.126006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 22 23:48:55.126108 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 22 23:48:55.128940 systemd[1]: kubelet.service: Consumed 1.628s CPU time, 111.4M memory peak.
Apr 22 23:48:55.875368 sshd[1802]: Accepted publickey for core from 10.0.0.1 port 42154 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 22 23:48:55.903371 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 22 23:48:56.122469 systemd-logind[1614]: New session 5 of user core.
Apr 22 23:48:56.146354 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 22 23:48:56.213101 sshd[1807]: Connection closed by 10.0.0.1 port 42154
Apr 22 23:48:56.271106 sshd-session[1802]: pam_unix(sshd:session): session closed for user core
Apr 22 23:48:56.299393 systemd[1]: sshd@3-10.0.0.19:22-10.0.0.1:42154.service: Deactivated successfully.
Apr 22 23:48:56.301477 systemd[1]: session-5.scope: Deactivated successfully.
Apr 22 23:48:56.304074 systemd-logind[1614]: Session 5 logged out. Waiting for processes to exit.
Apr 22 23:48:56.312011 systemd[1]: Started sshd@4-10.0.0.19:22-10.0.0.1:37452.service - OpenSSH per-connection server daemon (10.0.0.1:37452).
Apr 22 23:48:56.325287 systemd-logind[1614]: Removed session 5.
Apr 22 23:48:56.913425 sshd[1813]: Accepted publickey for core from 10.0.0.1 port 37452 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 22 23:48:56.993058 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 22 23:48:57.077513 systemd-logind[1614]: New session 6 of user core.
Apr 22 23:48:57.144277 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 22 23:48:57.514294 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 22 23:48:57.517114 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 22 23:48:59.093030 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 22 23:48:59.254807 (dockerd)[1840]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 22 23:49:05.420516 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 22 23:49:05.508993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:49:06.202122 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 2297495797 wd_nsec: 2297495868
Apr 22 23:49:09.676262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:49:09.743132 dockerd[1840]: time="2026-04-22T23:49:09.739253790Z" level=info msg="Starting up"
Apr 22 23:49:09.754193 (kubelet)[1858]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:49:09.773321 dockerd[1840]: time="2026-04-22T23:49:09.772448361Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Apr 22 23:49:10.500283 dockerd[1840]: time="2026-04-22T23:49:10.499109698Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Apr 22 23:49:10.674551 kubelet[1858]: E0422 23:49:10.669319 1858 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 22 23:49:10.701271 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 22 23:49:10.703363 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 22 23:49:10.710266 systemd[1]: kubelet.service: Consumed 2.965s CPU time, 110.9M memory peak.
Apr 22 23:49:12.139405 dockerd[1840]: time="2026-04-22T23:49:12.138256227Z" level=info msg="Loading containers: start."
Apr 22 23:49:12.272553 kernel: Initializing XFRM netlink socket
Apr 22 23:49:18.286726 systemd-networkd[1544]: docker0: Link UP
Apr 22 23:49:18.362706 dockerd[1840]: time="2026-04-22T23:49:18.361551030Z" level=info msg="Loading containers: done."
Apr 22 23:49:18.564809 dockerd[1840]: time="2026-04-22T23:49:18.563259903Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 22 23:49:18.566947 dockerd[1840]: time="2026-04-22T23:49:18.566018771Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Apr 22 23:49:18.570516 dockerd[1840]: time="2026-04-22T23:49:18.569433528Z" level=info msg="Initializing buildkit"
Apr 22 23:49:18.588891 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2548673407-merged.mount: Deactivated successfully.
Apr 22 23:49:19.459276 dockerd[1840]: time="2026-04-22T23:49:19.458966024Z" level=info msg="Completed buildkit initialization"
Apr 22 23:49:19.680446 dockerd[1840]: time="2026-04-22T23:49:19.679995158Z" level=info msg="Daemon has completed initialization"
Apr 22 23:49:19.682183 dockerd[1840]: time="2026-04-22T23:49:19.680533405Z" level=info msg="API listen on /run/docker.sock"
Apr 22 23:49:19.684292 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 22 23:49:20.898541 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 22 23:49:20.939017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:49:23.289491 update_engine[1619]: I20260422 23:49:23.285090 1619 update_attempter.cc:509] Updating boot flags...
Apr 22 23:49:23.864825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:49:23.934837 (kubelet)[2096]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:49:24.861643 kubelet[2096]: E0422 23:49:24.861437 2096 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 22 23:49:24.888178 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 22 23:49:24.888334 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 22 23:49:24.890262 systemd[1]: kubelet.service: Consumed 2.395s CPU time, 110.7M memory peak.
Apr 22 23:49:26.991101 containerd[1644]: time="2026-04-22T23:49:26.990787751Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\""
Apr 22 23:49:30.180962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3613086839.mount: Deactivated successfully.
Apr 22 23:49:35.161377 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 22 23:49:35.219146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:49:36.940808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:49:36.987437 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:49:37.999911 kubelet[2177]: E0422 23:49:37.998904 2177 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 22 23:49:38.060709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 22 23:49:38.062440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 22 23:49:38.064235 systemd[1]: kubelet.service: Consumed 2.455s CPU time, 110.6M memory peak.
Apr 22 23:49:44.366910 containerd[1644]: time="2026-04-22T23:49:44.366190645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:49:44.381728 containerd[1644]: time="2026-04-22T23:49:44.381321004Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27408248"
Apr 22 23:49:44.409519 containerd[1644]: time="2026-04-22T23:49:44.407719402Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:49:44.576551 containerd[1644]: time="2026-04-22T23:49:44.575429594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:49:44.696315 containerd[1644]: time="2026-04-22T23:49:44.694548028Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 17.703582346s"
Apr 22 23:49:44.696315 containerd[1644]: time="2026-04-22T23:49:44.695402053Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\""
Apr 22 23:49:44.798768 containerd[1644]: time="2026-04-22T23:49:44.796273766Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\""
Apr 22 23:49:48.159459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 22 23:49:48.184396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:49:50.453537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:49:50.481393 (kubelet)[2196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:49:51.655354 kubelet[2196]: E0422 23:49:51.653498 2196 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 22 23:49:51.677760 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 22 23:49:51.680180 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 22 23:49:51.690094 systemd[1]: kubelet.service: Consumed 3.563s CPU time, 112.3M memory peak.
Apr 22 23:50:01.349783 containerd[1644]: time="2026-04-22T23:50:01.347511774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:50:01.376416 containerd[1644]: time="2026-04-22T23:50:01.352346262Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21442870"
Apr 22 23:50:01.423521 containerd[1644]: time="2026-04-22T23:50:01.410372798Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:50:01.533556 containerd[1644]: time="2026-04-22T23:50:01.532240907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:50:01.547108 containerd[1644]: time="2026-04-22T23:50:01.545376906Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 16.748948211s"
Apr 22 23:50:01.547108 containerd[1644]: time="2026-04-22T23:50:01.545469683Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\""
Apr 22 23:50:01.553891 containerd[1644]: time="2026-04-22T23:50:01.553409645Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\""
Apr 22 23:50:01.926340 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 22 23:50:02.070543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:50:04.694151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:50:04.980505 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:50:06.359939 kubelet[2218]: E0422 23:50:06.359050 2218 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 22 23:50:06.376510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 22 23:50:06.376831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 22 23:50:06.388276 systemd[1]: kubelet.service: Consumed 2.682s CPU time, 110.5M memory peak.
Apr 22 23:50:12.811835 containerd[1644]: time="2026-04-22T23:50:12.809932758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:50:12.818440 containerd[1644]: time="2026-04-22T23:50:12.815121983Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15546519"
Apr 22 23:50:12.830260 containerd[1644]: time="2026-04-22T23:50:12.821298512Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:50:12.871263 containerd[1644]: time="2026-04-22T23:50:12.870326136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:50:13.021342 containerd[1644]: time="2026-04-22T23:50:13.020655906Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 11.467057716s"
Apr 22 23:50:13.029318 containerd[1644]: time="2026-04-22T23:50:13.022913458Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\""
Apr 22 23:50:13.055122 containerd[1644]: time="2026-04-22T23:50:13.053787186Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\""
Apr 22 23:50:16.958050 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Apr 22 23:50:16.963737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:50:19.521446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:50:19.677891 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:50:20.329149 kubelet[2237]: E0422 23:50:20.327555 2237 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 22 23:50:20.350225 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 22 23:50:20.353433 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 22 23:50:20.354790 systemd[1]: kubelet.service: Consumed 2.175s CPU time, 110.9M memory peak.
Apr 22 23:50:30.815823 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Apr 22 23:50:30.867910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:50:31.767504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1437986197.mount: Deactivated successfully.
Apr 22 23:50:34.209751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:50:34.309251 (kubelet)[2258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:50:35.357311 kubelet[2258]: E0422 23:50:35.356943 2258 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 22 23:50:35.365268 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 22 23:50:35.448386 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 22 23:50:35.467949 systemd[1]: kubelet.service: Consumed 2.749s CPU time, 110.4M memory peak.
Apr 22 23:50:40.380803 containerd[1644]: time="2026-04-22T23:50:40.380292759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:50:40.419167 containerd[1644]: time="2026-04-22T23:50:40.392545956Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25696203"
Apr 22 23:50:40.436208 containerd[1644]: time="2026-04-22T23:50:40.434339102Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:50:40.680759 containerd[1644]: time="2026-04-22T23:50:40.675681752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:50:40.680759 containerd[1644]: time="2026-04-22T23:50:40.676632275Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 27.622667811s"
Apr 22 23:50:40.680759 containerd[1644]: time="2026-04-22T23:50:40.676662142Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\""
Apr 22 23:50:40.689609 containerd[1644]: time="2026-04-22T23:50:40.689509005Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 22 23:50:44.451675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3199569921.mount: Deactivated successfully.
Apr 22 23:50:45.625982 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Apr 22 23:50:45.699173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:50:47.269458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:50:47.339007 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:50:47.762119 kubelet[2288]: E0422 23:50:47.761779 2288 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 22 23:50:47.777189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 22 23:50:47.777345 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 22 23:50:47.777943 systemd[1]: kubelet.service: Consumed 1.261s CPU time, 110.6M memory peak.
Apr 22 23:50:56.278401 containerd[1644]: time="2026-04-22T23:50:56.275365745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:50:56.285206 containerd[1644]: time="2026-04-22T23:50:56.284396144Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23545226"
Apr 22 23:50:56.290972 containerd[1644]: time="2026-04-22T23:50:56.290523067Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:50:56.391551 containerd[1644]: time="2026-04-22T23:50:56.390278770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:50:56.403443 containerd[1644]: time="2026-04-22T23:50:56.402292026Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 15.712635882s"
Apr 22 23:50:56.403443 containerd[1644]: time="2026-04-22T23:50:56.402444366Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Apr 22 23:50:56.414013 containerd[1644]: time="2026-04-22T23:50:56.413834314Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 22 23:50:57.967844 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Apr 22 23:50:58.021716 containerd[1644]: time="2026-04-22T23:50:58.020429630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:50:58.032324 containerd[1644]: time="2026-04-22T23:50:58.023653570Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=316649"
Apr 22 23:50:58.042406 containerd[1644]: time="2026-04-22T23:50:58.042290241Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:50:58.052273 containerd[1644]: time="2026-04-22T23:50:58.051535546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:50:58.053673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3135897572.mount: Deactivated successfully.
Apr 22 23:50:58.055673 containerd[1644]: time="2026-04-22T23:50:58.055627356Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.641694393s"
Apr 22 23:50:58.055847 containerd[1644]: time="2026-04-22T23:50:58.055828120Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 22 23:50:58.082772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:50:58.083556 containerd[1644]: time="2026-04-22T23:50:58.082903351Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 22 23:51:00.467718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:51:00.644856 (kubelet)[2352]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:51:01.077827 kubelet[2352]: E0422 23:51:01.076644 2352 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 22 23:51:01.244960 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 22 23:51:01.248823 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 22 23:51:01.280746 systemd[1]: kubelet.service: Consumed 1.684s CPU time, 110.9M memory peak.
Apr 22 23:51:01.404513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1227256203.mount: Deactivated successfully.
Apr 22 23:51:11.441936 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Apr 22 23:51:11.492537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:51:13.965108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:51:14.089043 (kubelet)[2387]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:51:14.686045 kubelet[2387]: E0422 23:51:14.685252 2387 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 22 23:51:14.726465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 22 23:51:14.759049 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 22 23:51:14.859078 systemd[1]: kubelet.service: Consumed 2.126s CPU time, 110.6M memory peak.
Apr 22 23:51:21.507492 containerd[1644]: time="2026-04-22T23:51:21.504375216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:51:21.619673 containerd[1644]: time="2026-04-22T23:51:21.540925920Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23637607"
Apr 22 23:51:21.619673 containerd[1644]: time="2026-04-22T23:51:21.557115307Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:51:21.801022 containerd[1644]: time="2026-04-22T23:51:21.788921947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:51:21.977695 containerd[1644]: time="2026-04-22T23:51:21.968947776Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 23.825196365s"
Apr 22 23:51:21.977695 containerd[1644]: time="2026-04-22T23:51:21.970363418Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Apr 22 23:51:24.980477 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Apr 22 23:51:25.249314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:51:28.849808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:51:28.904271 (kubelet)[2445]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:51:30.568258 kubelet[2445]: E0422 23:51:30.563020 2445 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 22 23:51:30.707829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 22 23:51:30.731449 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 22 23:51:30.762545 systemd[1]: kubelet.service: Consumed 2.765s CPU time, 111.4M memory peak.
Apr 22 23:51:41.091408 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Apr 22 23:51:41.378981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:51:46.322210 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:51:46.488336 (kubelet)[2483]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:51:47.466210 kubelet[2483]: E0422 23:51:47.463443 2483 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 22 23:51:47.563098 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 22 23:51:47.571974 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 22 23:51:47.716491 systemd[1]: kubelet.service: Consumed 2.594s CPU time, 110.9M memory peak.
Apr 22 23:51:51.045475 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:51:51.046201 systemd[1]: kubelet.service: Consumed 2.594s CPU time, 110.9M memory peak.
Apr 22 23:51:51.294200 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:51:52.265288 systemd[1]: Reload requested from client PID 2499 ('systemctl') (unit session-6.scope)...
Apr 22 23:51:52.268617 systemd[1]: Reloading...
Apr 22 23:51:58.648131 zram_generator::config[2548]: No configuration found.
Apr 22 23:52:14.327917 systemd[1]: Reloading finished in 22045 ms.
Apr 22 23:52:16.042516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:52:16.218377 (kubelet)[2584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 22 23:52:16.344195 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:52:16.457494 systemd[1]: kubelet.service: Deactivated successfully.
Apr 22 23:52:16.534395 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:52:16.534781 systemd[1]: kubelet.service: Consumed 1.388s CPU time, 101.1M memory peak.
Apr 22 23:52:16.959278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:52:21.689388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:52:21.725465 (kubelet)[2600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 22 23:52:22.227992 kubelet[2600]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 22 23:52:22.875925 kubelet[2600]: I0422 23:52:22.875265 2600 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 22 23:52:22.875925 kubelet[2600]: I0422 23:52:22.875534 2600 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 22 23:52:22.875925 kubelet[2600]: I0422 23:52:22.875552 2600 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 22 23:52:22.875925 kubelet[2600]: I0422 23:52:22.875630 2600 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 22 23:52:22.939173 kubelet[2600]: I0422 23:52:22.883408 2600 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 22 23:52:23.116413 kubelet[2600]: E0422 23:52:23.116140 2600 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 22 23:52:23.137959 kubelet[2600]: I0422 23:52:23.116638 2600 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 22 23:52:23.314031 kubelet[2600]: I0422 23:52:23.311031 2600 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 22 23:52:23.626228 kubelet[2600]: I0422 23:52:23.625796 2600 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 22 23:52:23.635541 kubelet[2600]: I0422 23:52:23.635021 2600 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 22 23:52:23.636226 kubelet[2600]: I0422 23:52:23.635420 2600 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 22 23:52:23.636226 kubelet[2600]: I0422 23:52:23.635878 2600 topology_manager.go:143] "Creating topology manager with none policy"
Apr 22 23:52:23.636226 kubelet[2600]: I0422 23:52:23.635886 2600 container_manager_linux.go:308] "Creating device plugin manager"
Apr 22 23:52:23.636794 kubelet[2600]: I0422 23:52:23.636401 2600 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 22 23:52:23.645174 kubelet[2600]: I0422 23:52:23.644459 2600 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 22 23:52:23.650534 kubelet[2600]: I0422 23:52:23.649970 2600 kubelet.go:482] "Attempting to sync node with API server"
Apr 22 23:52:23.667481 kubelet[2600]: I0422 23:52:23.651093 2600 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 22 23:52:23.667481 kubelet[2600]: I0422 23:52:23.655273 2600 kubelet.go:394] "Adding apiserver pod source"
Apr 22 23:52:23.667481 kubelet[2600]: I0422 23:52:23.656276 2600 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 22 23:52:23.707000 kubelet[2600]: I0422 23:52:23.704857 2600 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Apr 22 23:52:23.769028 kubelet[2600]: I0422 23:52:23.768794 2600 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 22 23:52:23.769028 kubelet[2600]: I0422 23:52:23.768910 2600 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 22 23:52:23.777963 kubelet[2600]: W0422 23:52:23.769235 2600 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 22 23:52:23.838889 kubelet[2600]: I0422 23:52:23.838516 2600 server.go:1257] "Started kubelet"
Apr 22 23:52:23.838889 kubelet[2600]: I0422 23:52:23.838782 2600 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 22 23:52:23.844631 kubelet[2600]: I0422 23:52:23.840919 2600 server.go:317] "Adding debug handlers to kubelet server"
Apr 22 23:52:23.844631 kubelet[2600]: I0422 23:52:23.843066 2600 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 22 23:52:23.844631 kubelet[2600]: I0422 23:52:23.843242 2600 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 22 23:52:23.884473 kubelet[2600]: I0422 23:52:23.848275 2600 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 22 23:52:23.884473 kubelet[2600]: E0422 23:52:23.879075 2600 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8d2e74b3086d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,LastTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 22 23:52:23.907326 kubelet[2600]: I0422 23:52:23.887194 2600 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 22 23:52:23.907326 kubelet[2600]: I0422 23:52:23.889200 2600 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 22 23:52:23.925837 kubelet[2600]: I0422 23:52:23.917106 2600 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 22 23:52:23.925837 kubelet[2600]: I0422 23:52:23.919012 2600 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 22 23:52:23.925837 kubelet[2600]: I0422 23:52:23.920141 2600 reconciler.go:29] "Reconciler: start to sync state"
Apr 22 23:52:23.925837 kubelet[2600]: E0422 23:52:23.924403 2600 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 22 23:52:23.925837 kubelet[2600]: E0422 23:52:23.925513 2600 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="200ms"
Apr 22 23:52:23.941188 kubelet[2600]: I0422 23:52:23.940868 2600 factory.go:223] Registration of the systemd container factory successfully
Apr 22 23:52:23.941188 kubelet[2600]: I0422 23:52:23.941055 2600 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 22 23:52:23.943403 kubelet[2600]: E0422 23:52:23.942550 2600 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 22 23:52:23.960094 kubelet[2600]: I0422 23:52:23.959061 2600 factory.go:223] Registration of the containerd container factory successfully
Apr 22 23:52:25.736734 kubelet[2600]: E0422 23:52:25.736456 2600 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 22 23:52:25.752434 kubelet[2600]: E0422 23:52:25.742345 2600 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 22 23:52:25.752434 kubelet[2600]: E0422 23:52:25.742992 2600 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="400ms"
Apr 22 23:52:25.867032 kubelet[2600]: E0422 23:52:25.860908 2600 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 22 23:52:25.985190 kubelet[2600]: E0422 23:52:25.981140 2600 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 22 23:52:25.996194 kubelet[2600]: I0422 23:52:25.995007 2600 kubelet_network_linux.go:54] "Initialized iptables rules."
protocol="IPv4" Apr 22 23:52:26.044883 kubelet[2600]: I0422 23:52:26.043092 2600 cpu_manager.go:225] "Starting" policy="none" Apr 22 23:52:26.044883 kubelet[2600]: I0422 23:52:26.044165 2600 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 22 23:52:26.044883 kubelet[2600]: I0422 23:52:26.044884 2600 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 22 23:52:26.075792 kubelet[2600]: I0422 23:52:26.049768 2600 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 22 23:52:26.075792 kubelet[2600]: I0422 23:52:26.050264 2600 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 22 23:52:26.075792 kubelet[2600]: I0422 23:52:26.050352 2600 kubelet.go:2501] "Starting kubelet main sync loop" Apr 22 23:52:26.075792 kubelet[2600]: E0422 23:52:26.051334 2600 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 22 23:52:26.085009 kubelet[2600]: I0422 23:52:26.083808 2600 policy_none.go:50] "Start" Apr 22 23:52:26.085009 kubelet[2600]: I0422 23:52:26.084127 2600 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 22 23:52:26.085009 kubelet[2600]: I0422 23:52:26.085158 2600 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 22 23:52:26.102278 kubelet[2600]: E0422 23:52:26.087345 2600 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:52:26.121848 kubelet[2600]: I0422 23:52:26.111149 2600 policy_none.go:44] "Start" Apr 22 23:52:26.152947 kubelet[2600]: E0422 23:52:26.152284 2600 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 22 23:52:26.152947 kubelet[2600]: E0422 23:52:26.152407 2600 controller.go:201] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="800ms" Apr 22 23:52:26.201041 kubelet[2600]: E0422 23:52:26.192187 2600 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:52:26.309487 kubelet[2600]: E0422 23:52:26.305059 2600 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:52:26.356873 kubelet[2600]: E0422 23:52:26.354135 2600 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 22 23:52:26.388381 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 22 23:52:26.420985 kubelet[2600]: E0422 23:52:26.420535 2600 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:52:26.540803 kubelet[2600]: E0422 23:52:26.538103 2600 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:52:26.642287 kubelet[2600]: E0422 23:52:26.641544 2600 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:52:26.751528 kubelet[2600]: E0422 23:52:26.750929 2600 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:52:26.760088 kubelet[2600]: E0422 23:52:26.759362 2600 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 22 23:52:26.791209 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 22 23:52:26.856172 kubelet[2600]: E0422 23:52:26.855358 2600 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:52:26.968629 kubelet[2600]: E0422 23:52:26.957456 2600 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:52:26.968629 kubelet[2600]: E0422 23:52:26.963223 2600 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="1.6s" Apr 22 23:52:27.064431 kubelet[2600]: E0422 23:52:27.062554 2600 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:52:27.168622 kubelet[2600]: E0422 23:52:27.168371 2600 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:52:27.188790 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 22 23:52:27.277901 kubelet[2600]: E0422 23:52:27.274258 2600 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:52:27.377047 kubelet[2600]: E0422 23:52:27.376556 2600 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:52:27.473988 kubelet[2600]: E0422 23:52:27.468946 2600 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 22 23:52:27.475132 kubelet[2600]: I0422 23:52:27.474673 2600 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 22 23:52:27.475132 kubelet[2600]: I0422 23:52:27.474711 2600 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 22 23:52:27.478953 kubelet[2600]: E0422 23:52:27.477102 2600 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:52:27.479958 kubelet[2600]: I0422 23:52:27.479427 2600 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 22 23:52:27.488653 kubelet[2600]: E0422 23:52:27.486266 2600 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 22 23:52:27.488653 kubelet[2600]: E0422 23:52:27.488198 2600 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 22 23:52:27.679523 kubelet[2600]: I0422 23:52:27.678805 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c67841a71302de5212118cd86fd71ba-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c67841a71302de5212118cd86fd71ba\") " pod="kube-system/kube-apiserver-localhost" Apr 22 23:52:27.679523 kubelet[2600]: I0422 23:52:27.679508 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c67841a71302de5212118cd86fd71ba-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c67841a71302de5212118cd86fd71ba\") " pod="kube-system/kube-apiserver-localhost" Apr 22 23:52:27.679523 kubelet[2600]: I0422 23:52:27.679706 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c67841a71302de5212118cd86fd71ba-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0c67841a71302de5212118cd86fd71ba\") " pod="kube-system/kube-apiserver-localhost" Apr 22 23:52:27.679523 kubelet[2600]: I0422 23:52:27.680332 2600 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 22 23:52:27.704847 kubelet[2600]: E0422 23:52:27.689520 2600 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Apr 22 23:52:27.787054 kubelet[2600]: I0422 23:52:27.784525 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 23:52:27.795985 kubelet[2600]: I0422 23:52:27.787126 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 23:52:27.806352 kubelet[2600]: I0422 23:52:27.796269 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 23:52:27.806352 kubelet[2600]: I0422 23:52:27.796453 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 23:52:27.806352 kubelet[2600]: I0422 23:52:27.796536 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 23:52:27.970542 kubelet[2600]: I0422 23:52:27.969665 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 22 23:52:27.990537 kubelet[2600]: I0422 23:52:27.989208 2600 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 22 23:52:27.993897 kubelet[2600]: E0422 23:52:27.992261 2600 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Apr 22 23:52:28.291049 systemd[1]: Created slice kubepods-burstable-pod0c67841a71302de5212118cd86fd71ba.slice - libcontainer container kubepods-burstable-pod0c67841a71302de5212118cd86fd71ba.slice. Apr 22 23:52:28.418339 kubelet[2600]: I0422 23:52:28.418039 2600 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 22 23:52:28.421687 kubelet[2600]: E0422 23:52:28.421207 2600 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Apr 22 23:52:28.430390 kubelet[2600]: E0422 23:52:28.430160 2600 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 22 23:52:28.446122 kubelet[2600]: E0422 23:52:28.444525 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:52:28.523138 containerd[1644]: time="2026-04-22T23:52:28.511440936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0c67841a71302de5212118cd86fd71ba,Namespace:kube-system,Attempt:0,}" Apr 22 23:52:28.576727 kubelet[2600]: E0422 23:52:28.573792 2600 controller.go:201] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="3.2s" Apr 22 23:52:28.587046 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice. Apr 22 23:52:28.730390 kubelet[2600]: E0422 23:52:28.729266 2600 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 22 23:52:28.766885 kubelet[2600]: E0422 23:52:28.762202 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:52:28.790695 containerd[1644]: time="2026-04-22T23:52:28.788438405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,}" Apr 22 23:52:29.031748 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice. 
Apr 22 23:52:29.242260 kubelet[2600]: E0422 23:52:29.238532 2600 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 22 23:52:29.292755 kubelet[2600]: I0422 23:52:29.286201 2600 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 22 23:52:29.304533 kubelet[2600]: E0422 23:52:29.303871 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:52:29.308254 kubelet[2600]: E0422 23:52:29.304474 2600 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Apr 22 23:52:29.374120 containerd[1644]: time="2026-04-22T23:52:29.372277259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,}" Apr 22 23:52:29.952903 kubelet[2600]: E0422 23:52:29.948356 2600 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 22 23:52:29.962438 kubelet[2600]: E0422 23:52:29.954506 2600 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8d2e74b3086d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,LastTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 22 23:52:30.949536 containerd[1644]: time="2026-04-22T23:52:30.948913728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 22 23:52:30.962827 containerd[1644]: time="2026-04-22T23:52:30.962757150Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 22 23:52:30.992852 containerd[1644]: time="2026-04-22T23:52:30.991245145Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 22 23:52:31.041976 kubelet[2600]: I0422 23:52:31.011275 2600 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 22 23:52:31.052213 kubelet[2600]: E0422 23:52:31.050181 2600 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Apr 22 23:52:31.054220 containerd[1644]: time="2026-04-22T23:52:31.053467285Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 22 23:52:31.055473 containerd[1644]: 
time="2026-04-22T23:52:31.055245631Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 22 23:52:31.091272 containerd[1644]: time="2026-04-22T23:52:31.088235202Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 22 23:52:31.161261 containerd[1644]: time="2026-04-22T23:52:31.158739979Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 22 23:52:31.248794 containerd[1644]: time="2026-04-22T23:52:31.234993029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 22 23:52:31.274438 containerd[1644]: time="2026-04-22T23:52:31.273084454Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.694355426s" Apr 22 23:52:31.395908 containerd[1644]: time="2026-04-22T23:52:31.392373369Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.588981681s" Apr 22 23:52:31.413269 containerd[1644]: time="2026-04-22T23:52:31.412235565Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.009808919s" Apr 22 23:52:31.417009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount463021472.mount: Deactivated successfully. Apr 22 23:52:31.869188 kubelet[2600]: E0422 23:52:31.868481 2600 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="6.4s" Apr 22 23:52:32.263905 containerd[1644]: time="2026-04-22T23:52:32.257535196Z" level=info msg="connecting to shim 1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62" address="unix:///run/containerd/s/6ba32b72164cd91bf659ca1b461d59fa9373c7c833adb85e108c1f63f7cb4764" namespace=k8s.io protocol=ttrpc version=3 Apr 22 23:52:32.291499 containerd[1644]: time="2026-04-22T23:52:32.291269884Z" level=info msg="connecting to shim c1b5349945d1a2228b70d5ed9e338ee60bce5f807f5a4f066d7523daa1d1b86f" address="unix:///run/containerd/s/3a5875e2ffea8b52b40e1376d493fb4d81e0bcbfc3fa4f4f720193f542909548" namespace=k8s.io protocol=ttrpc version=3 Apr 22 23:52:32.299210 containerd[1644]: time="2026-04-22T23:52:32.298174031Z" level=info msg="connecting to shim ebae48b4ff064f80d8f04b1b1d03180eaa106fa5a6237df5d9f742b3a5bb6d22" address="unix:///run/containerd/s/d35460d04c7a65f745c2a7f60ab15985a784d7e07e95ba5b2ca4579b97f30e0a" namespace=k8s.io protocol=ttrpc version=3 Apr 22 23:52:34.308646 kubelet[2600]: I0422 23:52:34.308392 2600 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 22 23:52:34.317016 kubelet[2600]: E0422 23:52:34.316932 2600 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Apr 22 
23:52:36.599781 systemd[1]: Started cri-containerd-1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62.scope - libcontainer container 1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62. Apr 22 23:52:36.904398 systemd[1]: Started cri-containerd-c1b5349945d1a2228b70d5ed9e338ee60bce5f807f5a4f066d7523daa1d1b86f.scope - libcontainer container c1b5349945d1a2228b70d5ed9e338ee60bce5f807f5a4f066d7523daa1d1b86f. Apr 22 23:52:37.269973 systemd[1]: Started cri-containerd-ebae48b4ff064f80d8f04b1b1d03180eaa106fa5a6237df5d9f742b3a5bb6d22.scope - libcontainer container ebae48b4ff064f80d8f04b1b1d03180eaa106fa5a6237df5d9f742b3a5bb6d22. Apr 22 23:52:37.491097 kubelet[2600]: E0422 23:52:37.490877 2600 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 22 23:52:38.254851 kubelet[2600]: E0422 23:52:38.252348 2600 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 22 23:52:38.303427 kubelet[2600]: E0422 23:52:38.300010 2600 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="7s" Apr 22 23:52:39.604201 containerd[1644]: time="2026-04-22T23:52:39.598259515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62\"" Apr 22 23:52:39.758908 kubelet[2600]: E0422 23:52:39.757511 2600 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:52:40.004341 kubelet[2600]: E0422 23:52:39.977257 2600 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8d2e74b3086d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,LastTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 22 23:52:40.061260 containerd[1644]: time="2026-04-22T23:52:40.058152324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0c67841a71302de5212118cd86fd71ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebae48b4ff064f80d8f04b1b1d03180eaa106fa5a6237df5d9f742b3a5bb6d22\"" Apr 22 23:52:40.061260 containerd[1644]: time="2026-04-22T23:52:40.058548956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1b5349945d1a2228b70d5ed9e338ee60bce5f807f5a4f066d7523daa1d1b86f\"" Apr 22 23:52:40.089755 kubelet[2600]: E0422 23:52:40.088859 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:52:40.089755 kubelet[2600]: E0422 23:52:40.089847 2600 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:52:40.143336 containerd[1644]: time="2026-04-22T23:52:40.132991026Z" level=info msg="CreateContainer within sandbox \"1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 22 23:52:40.345829 containerd[1644]: time="2026-04-22T23:52:40.303309278Z" level=info msg="CreateContainer within sandbox \"c1b5349945d1a2228b70d5ed9e338ee60bce5f807f5a4f066d7523daa1d1b86f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 22 23:52:40.435950 containerd[1644]: time="2026-04-22T23:52:40.434405289Z" level=info msg="CreateContainer within sandbox \"ebae48b4ff064f80d8f04b1b1d03180eaa106fa5a6237df5d9f742b3a5bb6d22\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 22 23:52:40.667146 containerd[1644]: time="2026-04-22T23:52:40.667014903Z" level=info msg="Container c87b061482afa662324645db676bf4034621328f3be9adb89707254dc3532398: CDI devices from CRI Config.CDIDevices: []"
Apr 22 23:52:40.858986 containerd[1644]: time="2026-04-22T23:52:40.858178125Z" level=info msg="Container 220db2584527a9f452aec23dd575abb06dea1ef705c6d1b97f7d3b5185470cc6: CDI devices from CRI Config.CDIDevices: []"
Apr 22 23:52:40.869394 kubelet[2600]: I0422 23:52:40.867785 2600 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 22 23:52:40.903998 kubelet[2600]: E0422 23:52:40.871124 2600 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost"
Apr 22 23:52:41.065229 containerd[1644]: time="2026-04-22T23:52:41.050758182Z" level=info msg="CreateContainer within sandbox \"1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c87b061482afa662324645db676bf4034621328f3be9adb89707254dc3532398\""
Apr 22 23:52:41.081205 containerd[1644]: time="2026-04-22T23:52:41.079761302Z" level=info msg="Container 18d6934792aa16c489d730fea71b4d0b880143b6342c8308ab99a47b296803cb: CDI devices from CRI Config.CDIDevices: []"
Apr 22 23:52:41.211355 containerd[1644]: time="2026-04-22T23:52:41.210190751Z" level=info msg="StartContainer for \"c87b061482afa662324645db676bf4034621328f3be9adb89707254dc3532398\""
Apr 22 23:52:41.266695 containerd[1644]: time="2026-04-22T23:52:41.265993162Z" level=info msg="connecting to shim c87b061482afa662324645db676bf4034621328f3be9adb89707254dc3532398" address="unix:///run/containerd/s/6ba32b72164cd91bf659ca1b461d59fa9373c7c833adb85e108c1f63f7cb4764" protocol=ttrpc version=3
Apr 22 23:52:41.355864 containerd[1644]: time="2026-04-22T23:52:41.355321915Z" level=info msg="CreateContainer within sandbox \"c1b5349945d1a2228b70d5ed9e338ee60bce5f807f5a4f066d7523daa1d1b86f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"220db2584527a9f452aec23dd575abb06dea1ef705c6d1b97f7d3b5185470cc6\""
Apr 22 23:52:41.373490 containerd[1644]: time="2026-04-22T23:52:41.372980147Z" level=info msg="StartContainer for \"220db2584527a9f452aec23dd575abb06dea1ef705c6d1b97f7d3b5185470cc6\""
Apr 22 23:52:41.450809 containerd[1644]: time="2026-04-22T23:52:41.448159158Z" level=info msg="CreateContainer within sandbox \"ebae48b4ff064f80d8f04b1b1d03180eaa106fa5a6237df5d9f742b3a5bb6d22\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"18d6934792aa16c489d730fea71b4d0b880143b6342c8308ab99a47b296803cb\""
Apr 22 23:52:41.571961 containerd[1644]: time="2026-04-22T23:52:41.570349194Z" level=info msg="StartContainer for \"18d6934792aa16c489d730fea71b4d0b880143b6342c8308ab99a47b296803cb\""
Apr 22 23:52:41.591155 containerd[1644]: time="2026-04-22T23:52:41.591053506Z" level=info msg="connecting to shim 220db2584527a9f452aec23dd575abb06dea1ef705c6d1b97f7d3b5185470cc6" address="unix:///run/containerd/s/3a5875e2ffea8b52b40e1376d493fb4d81e0bcbfc3fa4f4f720193f542909548" protocol=ttrpc version=3
Apr 22 23:52:41.711124 containerd[1644]: time="2026-04-22T23:52:41.709147789Z" level=info msg="connecting to shim 18d6934792aa16c489d730fea71b4d0b880143b6342c8308ab99a47b296803cb" address="unix:///run/containerd/s/d35460d04c7a65f745c2a7f60ab15985a784d7e07e95ba5b2ca4579b97f30e0a" protocol=ttrpc version=3
Apr 22 23:52:43.986063 systemd[1]: Started cri-containerd-18d6934792aa16c489d730fea71b4d0b880143b6342c8308ab99a47b296803cb.scope - libcontainer container 18d6934792aa16c489d730fea71b4d0b880143b6342c8308ab99a47b296803cb.
Apr 22 23:52:44.200302 systemd[1]: Started cri-containerd-220db2584527a9f452aec23dd575abb06dea1ef705c6d1b97f7d3b5185470cc6.scope - libcontainer container 220db2584527a9f452aec23dd575abb06dea1ef705c6d1b97f7d3b5185470cc6.
Apr 22 23:52:44.448377 systemd[1]: Started cri-containerd-c87b061482afa662324645db676bf4034621328f3be9adb89707254dc3532398.scope - libcontainer container c87b061482afa662324645db676bf4034621328f3be9adb89707254dc3532398.
Apr 22 23:52:45.334291 kubelet[2600]: E0422 23:52:45.333139 2600 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="7s"
Apr 22 23:52:45.982101 containerd[1644]: time="2026-04-22T23:52:45.980985499Z" level=error msg="get state for 18d6934792aa16c489d730fea71b4d0b880143b6342c8308ab99a47b296803cb" error="context deadline exceeded"
Apr 22 23:52:45.997170 containerd[1644]: time="2026-04-22T23:52:45.990196111Z" level=warning msg="unknown status" status=0
Apr 22 23:52:46.202027 containerd[1644]: time="2026-04-22T23:52:46.200322406Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 22 23:52:47.198528 containerd[1644]: time="2026-04-22T23:52:47.190552019Z" level=info msg="StartContainer for \"18d6934792aa16c489d730fea71b4d0b880143b6342c8308ab99a47b296803cb\" returns successfully"
Apr 22 23:52:47.503768 kubelet[2600]: E0422 23:52:47.492686 2600 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 22 23:52:48.227037 kubelet[2600]: I0422 23:52:48.225913 2600 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 22 23:52:48.232084 kubelet[2600]: E0422 23:52:48.231806 2600 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost"
Apr 22 23:52:48.650126 containerd[1644]: time="2026-04-22T23:52:48.648902558Z" level=info msg="StartContainer for \"c87b061482afa662324645db676bf4034621328f3be9adb89707254dc3532398\" returns successfully"
Apr 22 23:52:50.001499 kubelet[2600]: E0422 23:52:49.996372 2600 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:52:50.111532 kubelet[2600]: E0422 23:52:50.072157 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:52:50.160349 containerd[1644]: time="2026-04-22T23:52:50.158194764Z" level=info msg="StartContainer for \"220db2584527a9f452aec23dd575abb06dea1ef705c6d1b97f7d3b5185470cc6\" returns successfully"
Apr 22 23:52:50.262952 kubelet[2600]: E0422 23:52:50.101134 2600 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8d2e74b3086d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,LastTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 22 23:52:50.580087 kubelet[2600]: E0422 23:52:50.559782 2600 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:52:50.643498 kubelet[2600]: E0422 23:52:50.582636 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:52:52.088522 kubelet[2600]: E0422 23:52:52.080874 2600 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:52:52.140811 kubelet[2600]: E0422 23:52:52.140099 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:52:52.192817 kubelet[2600]: E0422 23:52:52.191389 2600 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:52:52.203717 kubelet[2600]: E0422 23:52:52.203134 2600 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:52:52.267979 kubelet[2600]: E0422 23:52:52.264705 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:52:52.281538 kubelet[2600]: E0422 23:52:52.281214 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:52:52.386532 kubelet[2600]: E0422 23:52:52.385794 2600 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="7s"
Apr 22 23:52:53.569147 kubelet[2600]: E0422 23:52:53.568836 2600 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:52:53.569147 kubelet[2600]: E0422 23:52:53.569448 2600 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:52:53.591295 kubelet[2600]: E0422 23:52:53.569556 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:52:53.591295 kubelet[2600]: E0422 23:52:53.583156 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:52:55.263795 kubelet[2600]: I0422 23:52:55.263330 2600 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 22 23:52:55.756881 kubelet[2600]: E0422 23:52:55.755326 2600 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:52:55.781827 kubelet[2600]: E0422 23:52:55.778842 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:52:55.955054 kubelet[2600]: E0422 23:52:55.954659 2600 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:52:55.960526 kubelet[2600]: E0422 23:52:55.959814 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:52:57.509383 kubelet[2600]: E0422 23:52:57.497502 2600 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 22 23:53:00.264165 kubelet[2600]: E0422 23:53:00.261063 2600 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:53:00.264165 kubelet[2600]: E0422 23:53:00.262858 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:53:05.295835 kubelet[2600]: E0422 23:53:05.292651 2600 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 22 23:53:05.585970 kubelet[2600]: E0422 23:53:05.576236 2600 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 22 23:53:05.585970 kubelet[2600]: E0422 23:53:05.582805 2600 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 22 23:53:06.442534 kubelet[2600]: E0422 23:53:06.441782 2600 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:53:06.458113 kubelet[2600]: E0422 23:53:06.455143 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:53:07.535203 kubelet[2600]: E0422 23:53:07.530886 2600 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 22 23:53:09.461010 kubelet[2600]: E0422 23:53:09.458685 2600 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 22 23:53:10.293202 kubelet[2600]: E0422 23:53:10.284300 2600 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8d2e74b3086d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,LastTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 22 23:53:12.433981 kubelet[2600]: I0422 23:53:12.433221 2600 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 22 23:53:17.551192 kubelet[2600]: E0422 23:53:17.549959 2600 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 22 23:53:17.762801 kubelet[2600]: E0422 23:53:17.758780 2600 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:53:17.872168 kubelet[2600]: E0422 23:53:17.778055 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:53:22.442191 kubelet[2600]: E0422 23:53:22.441283 2600 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 22 23:53:26.469024 kubelet[2600]: E0422 23:53:26.468671 2600 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 22 23:53:27.586730 kubelet[2600]: E0422 23:53:27.585196 2600 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 22 23:53:29.537225 kubelet[2600]: I0422 23:53:29.535043 2600 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 22 23:53:30.328194 kubelet[2600]: E0422 23:53:30.325965 2600 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8d2e74b3086d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,LastTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 22 23:53:37.609048 kubelet[2600]: E0422 23:53:37.606787 2600 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 22 23:53:39.563422 kubelet[2600]: E0422 23:53:39.559727 2600 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 22 23:53:43.526395 kubelet[2600]: E0422 23:53:43.525428 2600 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 22 23:53:46.629886 kubelet[2600]: I0422 23:53:46.629167 2600 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 22 23:53:47.609009 kubelet[2600]: E0422 23:53:47.607849 2600 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 22 23:53:47.619172 kubelet[2600]: E0422 23:53:47.617237 2600 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 22 23:53:50.353020 kubelet[2600]: E0422 23:53:50.352132 2600 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8d2e74b3086d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,LastTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 22 23:53:56.662322 kubelet[2600]: E0422 23:53:56.660742 2600 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 22 23:53:57.628255 kubelet[2600]: E0422 23:53:57.626921 2600 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 22 23:54:00.534933 kubelet[2600]: E0422 23:54:00.534466 2600 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 22 23:54:03.846325 kubelet[2600]: I0422 23:54:03.845918 2600 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 22 23:54:07.643397 kubelet[2600]: E0422 23:54:07.642816 2600 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 22 23:54:08.128864 kubelet[2600]: E0422 23:54:08.127839 2600 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:54:08.134730 kubelet[2600]: E0422 23:54:08.134368 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:54:10.371198 kubelet[2600]: E0422 23:54:10.364395 2600 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8d2e74b3086d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,LastTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 22 23:54:13.881143 kubelet[2600]: E0422 23:54:13.879501 2600 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 22 23:54:15.286318 kubelet[2600]: E0422 23:54:15.285067 2600 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:54:15.286318 kubelet[2600]: E0422 23:54:15.303143 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:54:17.566074 kubelet[2600]: E0422 23:54:17.565738 2600 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 22 23:54:17.646247 kubelet[2600]: E0422 23:54:17.644266 2600 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 22 23:54:19.656883 kubelet[2600]: E0422 23:54:19.655721 2600 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 22 23:54:21.011118 kubelet[2600]: I0422 23:54:20.995159 2600 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 22 23:54:25.171120 kubelet[2600]: E0422 23:54:25.170849 2600 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:54:25.239243 kubelet[2600]: E0422 23:54:25.172765 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:54:27.675825 kubelet[2600]: E0422 23:54:27.674148 2600 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 22 23:54:30.476016 kubelet[2600]: E0422 23:54:30.475141 2600 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8d2e74b3086d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,LastTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 22 23:54:31.063076 kubelet[2600]: E0422 23:54:31.062037 2600 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 22 23:54:34.576621 kubelet[2600]: E0422 23:54:34.575953 2600 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 22 23:54:37.683962 kubelet[2600]: E0422 23:54:37.682147 2600 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 22 23:54:38.152293 kubelet[2600]: I0422 23:54:38.150962 2600 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 22 23:54:45.857377 systemd[1753]: Created slice background.slice - User Background Tasks Slice.
Apr 22 23:54:45.896139 systemd[1753]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories...
Apr 22 23:54:46.349744 systemd[1753]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories.
Apr 22 23:54:47.705377 kubelet[2600]: E0422 23:54:47.701366 2600 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 22 23:54:48.190274 kubelet[2600]: E0422 23:54:48.187723 2600 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 22 23:54:50.545537 kubelet[2600]: E0422 23:54:50.544215 2600 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8d2e74b3086d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,LastTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 22 23:54:51.590119 kubelet[2600]: E0422 23:54:51.587025 2600 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 22 23:54:51.701867 kubelet[2600]: E0422 23:54:51.690295 2600 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 22 23:54:55.350753 kubelet[2600]: I0422 23:54:55.350232 2600 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 22 23:54:57.799264 kubelet[2600]: E0422 23:54:57.793359 2600 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 22 23:55:05.584107 kubelet[2600]: E0422 23:55:05.583014 2600 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 22 23:55:07.824095 kubelet[2600]: E0422 23:55:07.801484 2600 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 22 23:55:08.654209 kubelet[2600]: E0422 23:55:08.651268 2600 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 22 23:55:11.234945 kubelet[2600]: E0422 23:55:11.231271 2600 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a8d2e74b3086d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,LastTimestamp:2026-04-22 23:52:23.838410448 +0000 UTC m=+2.104956608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 22 23:55:11.372331 kubelet[2600]: I0422 23:55:11.284311 2600 apiserver.go:52] "Watching apiserver"
Apr 22 23:55:11.782834 kubelet[2600]: I0422 23:55:11.782217 2600 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 22 23:55:11.893613 kubelet[2600]: E0422 23:55:11.892539 2600 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Apr 22 23:55:11.974994 kubelet[2600]: E0422 23:55:11.972403 2600 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a8d2e751654071 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-22 23:52:23.942529137 +0000 UTC m=+2.209075303,LastTimestamp:2026-04-22 23:52:23.942529137 +0000 UTC m=+2.209075303,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 22 23:55:12.658886 kubelet[2600]: I0422 23:55:12.658522 2600 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 22 23:55:12.693880 kubelet[2600]: E0422 23:55:12.692843 2600 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a8d2e7cb250b97 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-22 23:52:25.985141655 +0000 UTC m=+4.251687813,LastTimestamp:2026-04-22 23:52:25.985141655 +0000 UTC m=+4.251687813,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 22 23:55:12.965957 kubelet[2600]: I0422 23:55:12.954065 2600 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Apr 22 23:55:13.041121 kubelet[2600]: I0422 23:55:13.039486 2600 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 22 23:55:13.467156 kubelet[2600]: E0422 23:55:13.459340 2600 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a8d2e7cb2ea3ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-22 23:52:25.985770477 +0000 UTC m=+4.252316633,LastTimestamp:2026-04-22 23:52:25.985770477 +0000 UTC m=+4.252316633,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 22 23:55:14.110175 kubelet[2600]: E0422 23:55:14.109284 2600 kubelet_node_status.go:386] "Node not becoming ready in time after startup"
Apr 22 23:55:14.486915 kubelet[2600]: I0422 23:55:14.467386 2600 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 22 23:55:14.701215 kubelet[2600]: E0422 23:55:14.700918 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:15.234057 kubelet[2600]: E0422 23:55:15.233337 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:15.246260 kubelet[2600]: I0422 23:55:15.237933 2600 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 22 23:55:15.577285 kubelet[2600]: E0422 23:55:15.571978 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:15.992394 kubelet[2600]: E0422 23:55:15.985077 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:55:18.788554 kubelet[2600]: I0422 23:55:18.776398 2600 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.776383635 podStartE2EDuration="3.776383635s" podCreationTimestamp="2026-04-22 23:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-22 23:55:18.776336274 +0000 UTC m=+177.042882436" watchObservedRunningTime="2026-04-22 23:55:18.776383635 +0000 UTC m=+177.042929801"
Apr 22 23:55:19.541338 kubelet[2600]: I0422 23:55:19.539709 2600 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.539545444 podStartE2EDuration="4.539545444s" podCreationTimestamp="2026-04-22 23:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-22 23:55:19.537705235 +0000 UTC m=+177.804251403" watchObservedRunningTime="2026-04-22 23:55:19.539545444 +0000 UTC m=+177.806091610"
Apr 22 23:55:21.077274 kubelet[2600]: E0422 23:55:21.075776 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:55:26.307306 kubelet[2600]: E0422 23:55:26.306080 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:55:31.475160 kubelet[2600]: E0422 23:55:31.474915 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:55:36.559223 kubelet[2600]: E0422 23:55:36.558223 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:55:41.833368 kubelet[2600]: E0422 23:55:41.831840 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:55:46.996391 kubelet[2600]: E0422 23:55:46.964938 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin
not initialized" Apr 22 23:55:52.068248 kubelet[2600]: E0422 23:55:52.062897 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:55:57.199326 kubelet[2600]: E0422 23:55:57.194000 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:02.329915 kubelet[2600]: E0422 23:56:02.328986 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:07.456130 kubelet[2600]: E0422 23:56:07.452274 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:12.511702 kubelet[2600]: E0422 23:56:12.509646 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:17.600110 kubelet[2600]: E0422 23:56:17.598455 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:20.340742 systemd[1]: cri-containerd-c87b061482afa662324645db676bf4034621328f3be9adb89707254dc3532398.scope: Deactivated successfully. Apr 22 23:56:20.344007 systemd[1]: cri-containerd-c87b061482afa662324645db676bf4034621328f3be9adb89707254dc3532398.scope: Consumed 15.554s CPU time, 20M memory peak. 
Apr 22 23:56:20.732227 containerd[1644]: time="2026-04-22T23:56:20.731333824Z" level=info msg="received container exit event container_id:\"c87b061482afa662324645db676bf4034621328f3be9adb89707254dc3532398\" id:\"c87b061482afa662324645db676bf4034621328f3be9adb89707254dc3532398\" pid:2843 exit_status:1 exited_at:{seconds:1776902180 nanos:652107589}" Apr 22 23:56:22.824376 kubelet[2600]: E0422 23:56:22.823285 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:23.296139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c87b061482afa662324645db676bf4034621328f3be9adb89707254dc3532398-rootfs.mount: Deactivated successfully. Apr 22 23:56:24.341113 kubelet[2600]: I0422 23:56:24.337270 2600 scope.go:122] "RemoveContainer" containerID="c87b061482afa662324645db676bf4034621328f3be9adb89707254dc3532398" Apr 22 23:56:24.403244 kubelet[2600]: E0422 23:56:24.360366 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:56:24.540971 containerd[1644]: time="2026-04-22T23:56:24.539918792Z" level=info msg="CreateContainer within sandbox \"1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 22 23:56:25.080097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1279614060.mount: Deactivated successfully. Apr 22 23:56:25.198148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1658646322.mount: Deactivated successfully. 
Apr 22 23:56:25.282990 containerd[1644]: time="2026-04-22T23:56:25.280005751Z" level=info msg="Container e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351: CDI devices from CRI Config.CDIDevices: []" Apr 22 23:56:25.850138 containerd[1644]: time="2026-04-22T23:56:25.849064232Z" level=info msg="CreateContainer within sandbox \"1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351\"" Apr 22 23:56:26.010302 containerd[1644]: time="2026-04-22T23:56:25.948151434Z" level=info msg="StartContainer for \"e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351\"" Apr 22 23:56:26.118338 containerd[1644]: time="2026-04-22T23:56:26.112510694Z" level=info msg="connecting to shim e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351" address="unix:///run/containerd/s/6ba32b72164cd91bf659ca1b461d59fa9373c7c833adb85e108c1f63f7cb4764" protocol=ttrpc version=3 Apr 22 23:56:27.731959 kubelet[2600]: I0422 23:56:27.729154 2600 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=73.729123959 podStartE2EDuration="1m13.729123959s" podCreationTimestamp="2026-04-22 23:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-22 23:55:20.349117643 +0000 UTC m=+178.615663815" watchObservedRunningTime="2026-04-22 23:56:27.729123959 +0000 UTC m=+245.995670121" Apr 22 23:56:28.047550 systemd[1]: Started cri-containerd-e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351.scope - libcontainer container e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351. 
Apr 22 23:56:28.390099 kubelet[2600]: E0422 23:56:28.307185 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:31.476555 containerd[1644]: time="2026-04-22T23:56:31.476043848Z" level=info msg="StartContainer for \"e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351\" returns successfully" Apr 22 23:56:33.200333 kubelet[2600]: E0422 23:56:33.196378 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:56:33.610381 kubelet[2600]: E0422 23:56:33.590386 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:34.275276 kubelet[2600]: E0422 23:56:34.275009 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:56:35.127837 kubelet[2600]: E0422 23:56:35.126445 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:56:35.992465 kubelet[2600]: E0422 23:56:35.992066 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:56:38.692759 kubelet[2600]: E0422 23:56:38.689793 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:42.935073 kubelet[2600]: E0422 23:56:42.934302 2600 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:56:43.816875 kubelet[2600]: E0422 23:56:43.816062 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:46.590202 kubelet[2600]: E0422 23:56:46.586059 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:56:48.993095 kubelet[2600]: E0422 23:56:48.990460 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:54.027032 kubelet[2600]: E0422 23:56:54.026301 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:59.165381 kubelet[2600]: E0422 23:56:59.161324 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:57:04.318224 kubelet[2600]: E0422 23:57:04.313168 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:57:08.564020 systemd[1]: cri-containerd-e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351.scope: Deactivated successfully. Apr 22 23:57:08.621794 systemd[1]: cri-containerd-e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351.scope: Consumed 7.259s CPU time, 18.2M memory peak. 
Apr 22 23:57:08.744132 containerd[1644]: time="2026-04-22T23:57:08.696097707Z" level=info msg="received container exit event container_id:\"e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351\" id:\"e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351\" pid:2953 exit_status:1 exited_at:{seconds:1776902228 nanos:561785693}"
Apr 22 23:57:09.405243 kubelet[2600]: E0422 23:57:09.387437 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:57:11.388950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351-rootfs.mount: Deactivated successfully.
Apr 22 23:57:13.352053 kubelet[2600]: I0422 23:57:13.351371 2600 scope.go:122] "RemoveContainer" containerID="c87b061482afa662324645db676bf4034621328f3be9adb89707254dc3532398"
Apr 22 23:57:13.481333 kubelet[2600]: I0422 23:57:13.403335 2600 scope.go:122] "RemoveContainer" containerID="e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351"
Apr 22 23:57:13.481333 kubelet[2600]: E0422 23:57:13.403505 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:57:13.481333 kubelet[2600]: E0422 23:57:13.411223 2600 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 22 23:57:13.661490 containerd[1644]: time="2026-04-22T23:57:13.659581615Z" level=info msg="RemoveContainer for \"c87b061482afa662324645db676bf4034621328f3be9adb89707254dc3532398\""
Apr 22 23:57:13.885880 containerd[1644]: time="2026-04-22T23:57:13.884216899Z" level=info msg="RemoveContainer for \"c87b061482afa662324645db676bf4034621328f3be9adb89707254dc3532398\" returns successfully"
Apr 22 23:57:14.475906 kubelet[2600]: E0422 23:57:14.473539 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:57:16.728054 kubelet[2600]: I0422 23:57:16.727117 2600 scope.go:122] "RemoveContainer" containerID="e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351"
Apr 22 23:57:16.789869 kubelet[2600]: E0422 23:57:16.730314 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:57:16.789869 kubelet[2600]: E0422 23:57:16.736108 2600 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 22 23:57:19.153836 kubelet[2600]: I0422 23:57:19.138295 2600 scope.go:122] "RemoveContainer" containerID="e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351"
Apr 22 23:57:19.153836 kubelet[2600]: E0422 23:57:19.152092 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:57:19.498630 containerd[1644]: time="2026-04-22T23:57:19.497133423Z" level=info msg="CreateContainer within sandbox \"1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}"
Apr 22 23:57:19.679098 kubelet[2600]: E0422 23:57:19.677242 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:57:20.245588 containerd[1644]: time="2026-04-22T23:57:20.238358711Z" level=info msg="Container c00f8a8443ee25b7ca0838b53fa56cb7875d4a41e95b9e39fc70a0422139daf8: CDI devices from CRI Config.CDIDevices: []"
Apr 22 23:57:20.288120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount570876099.mount: Deactivated successfully.
Apr 22 23:57:20.935954 containerd[1644]: time="2026-04-22T23:57:20.934968523Z" level=info msg="CreateContainer within sandbox \"1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"c00f8a8443ee25b7ca0838b53fa56cb7875d4a41e95b9e39fc70a0422139daf8\""
Apr 22 23:57:21.069011 containerd[1644]: time="2026-04-22T23:57:20.981922684Z" level=info msg="StartContainer for \"c00f8a8443ee25b7ca0838b53fa56cb7875d4a41e95b9e39fc70a0422139daf8\""
Apr 22 23:57:21.210553 containerd[1644]: time="2026-04-22T23:57:21.185554249Z" level=info msg="connecting to shim c00f8a8443ee25b7ca0838b53fa56cb7875d4a41e95b9e39fc70a0422139daf8" address="unix:///run/containerd/s/6ba32b72164cd91bf659ca1b461d59fa9373c7c833adb85e108c1f63f7cb4764" protocol=ttrpc version=3
Apr 22 23:57:22.397306 systemd[1]: Started cri-containerd-c00f8a8443ee25b7ca0838b53fa56cb7875d4a41e95b9e39fc70a0422139daf8.scope - libcontainer container c00f8a8443ee25b7ca0838b53fa56cb7875d4a41e95b9e39fc70a0422139daf8.
Apr 22 23:57:24.853696 kubelet[2600]: E0422 23:57:24.852738 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:57:25.303173 containerd[1644]: time="2026-04-22T23:57:25.289056701Z" level=info msg="StartContainer for \"c00f8a8443ee25b7ca0838b53fa56cb7875d4a41e95b9e39fc70a0422139daf8\" returns successfully"
Apr 22 23:57:26.663186 kubelet[2600]: E0422 23:57:26.660077 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:57:29.986829 kubelet[2600]: E0422 23:57:29.903971 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:57:34.967991 kubelet[2600]: E0422 23:57:34.967247 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:57:35.966977 kubelet[2600]: E0422 23:57:35.961361 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:57:36.653538 kubelet[2600]: E0422 23:57:36.652542 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:57:36.764862 kubelet[2600]: E0422 23:57:36.763093 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:57:37.286991 kubelet[2600]: E0422 23:57:37.284122 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:57:39.610234 containerd[1644]: time="2026-04-22T23:57:39.605073147Z" level=info msg="container event discarded" container=1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62 type=CONTAINER_CREATED_EVENT
Apr 22 23:57:39.781124 containerd[1644]: time="2026-04-22T23:57:39.647117835Z" level=info msg="container event discarded" container=1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62 type=CONTAINER_STARTED_EVENT
Apr 22 23:57:40.064084 kubelet[2600]: E0422 23:57:40.050222 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:57:40.152159 containerd[1644]: time="2026-04-22T23:57:40.067834324Z" level=info msg="container event discarded" container=ebae48b4ff064f80d8f04b1b1d03180eaa106fa5a6237df5d9f742b3a5bb6d22 type=CONTAINER_CREATED_EVENT
Apr 22 23:57:40.152159 containerd[1644]: time="2026-04-22T23:57:40.075288466Z" level=info msg="container event discarded" container=ebae48b4ff064f80d8f04b1b1d03180eaa106fa5a6237df5d9f742b3a5bb6d22 type=CONTAINER_STARTED_EVENT
Apr 22 23:57:40.152159 containerd[1644]: time="2026-04-22T23:57:40.075598785Z" level=info msg="container event discarded" container=c1b5349945d1a2228b70d5ed9e338ee60bce5f807f5a4f066d7523daa1d1b86f type=CONTAINER_CREATED_EVENT
Apr 22 23:57:40.152159 containerd[1644]: time="2026-04-22T23:57:40.075610008Z" level=info msg="container event discarded" container=c1b5349945d1a2228b70d5ed9e338ee60bce5f807f5a4f066d7523daa1d1b86f type=CONTAINER_STARTED_EVENT
Apr 22 23:57:41.041486 containerd[1644]: time="2026-04-22T23:57:41.040354252Z" level=info msg="container event discarded" container=c87b061482afa662324645db676bf4034621328f3be9adb89707254dc3532398 type=CONTAINER_CREATED_EVENT
Apr 22 23:57:41.333259 containerd[1644]: time="2026-04-22T23:57:41.309268065Z" level=info msg="container event discarded" container=18d6934792aa16c489d730fea71b4d0b880143b6342c8308ab99a47b296803cb type=CONTAINER_CREATED_EVENT
Apr 22 23:57:41.333259 containerd[1644]: time="2026-04-22T23:57:41.310541963Z" level=info msg="container event discarded" container=220db2584527a9f452aec23dd575abb06dea1ef705c6d1b97f7d3b5185470cc6 type=CONTAINER_CREATED_EVENT
Apr 22 23:57:45.138249 kubelet[2600]: E0422 23:57:45.133866 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:57:47.043996 containerd[1644]: time="2026-04-22T23:57:47.043136820Z" level=info msg="container event discarded" container=18d6934792aa16c489d730fea71b4d0b880143b6342c8308ab99a47b296803cb type=CONTAINER_STARTED_EVENT
Apr 22 23:57:47.992820 systemd[1]: Reload requested from client PID 3026 ('systemctl') (unit session-6.scope)...
Apr 22 23:57:47.998159 systemd[1]: Reloading...
Apr 22 23:57:48.303159 containerd[1644]: time="2026-04-22T23:57:48.247457464Z" level=info msg="container event discarded" container=c87b061482afa662324645db676bf4034621328f3be9adb89707254dc3532398 type=CONTAINER_STARTED_EVENT
Apr 22 23:57:49.651887 containerd[1644]: time="2026-04-22T23:57:49.651197551Z" level=info msg="container event discarded" container=220db2584527a9f452aec23dd575abb06dea1ef705c6d1b97f7d3b5185470cc6 type=CONTAINER_STARTED_EVENT
Apr 22 23:57:50.346191 kubelet[2600]: E0422 23:57:50.345272 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:57:53.326277 kubelet[2600]: E0422 23:57:53.312345 2600 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.253s"
Apr 22 23:57:53.791349 zram_generator::config[3076]: No configuration found.
Apr 22 23:57:55.554395 kubelet[2600]: E0422 23:57:55.553991 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:58:00.783893 kubelet[2600]: E0422 23:58:00.783719 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:58:05.881941 kubelet[2600]: E0422 23:58:05.881284 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:58:10.986983 kubelet[2600]: E0422 23:58:10.982352 2600 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:58:11.070908 kubelet[2600]: E0422 23:58:11.064378 2600 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:58:11.986976 systemd[1]: Reloading finished in 23980 ms.
Apr 22 23:58:13.428766 kubelet[2600]: I0422 23:58:13.426498 2600 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 22 23:58:13.439189 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:58:13.553995 systemd[1]: kubelet.service: Deactivated successfully.
Apr 22 23:58:13.642148 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:58:13.646915 systemd[1]: kubelet.service: Consumed 1min 57.247s CPU time, 139M memory peak.
Apr 22 23:58:13.980940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:58:19.430260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:58:19.574941 (kubelet)[3117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 22 23:58:22.433898 kubelet[3117]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 22 23:58:22.737207 kubelet[3117]: I0422 23:58:22.718937 3117 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 22 23:58:22.737207 kubelet[3117]: I0422 23:58:22.725411 3117 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 22 23:58:22.737207 kubelet[3117]: I0422 23:58:22.729155 3117 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 22 23:58:22.737207 kubelet[3117]: I0422 23:58:22.731399 3117 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 22 23:58:22.774165 kubelet[3117]: I0422 23:58:22.742901 3117 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 22 23:58:22.800828 kubelet[3117]: I0422 23:58:22.800024 3117 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 22 23:58:23.280508 kubelet[3117]: I0422 23:58:23.276165 3117 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 22 23:58:23.959314 kubelet[3117]: I0422 23:58:23.958148 3117 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 22 23:58:24.356998 kubelet[3117]: I0422 23:58:24.355039 3117 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 22 23:58:24.376308 kubelet[3117]: I0422 23:58:24.373025 3117 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 22 23:58:24.404139 kubelet[3117]: I0422 23:58:24.379074 3117 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 22 23:58:24.404139 kubelet[3117]: I0422 23:58:24.400747 3117 topology_manager.go:143] "Creating topology manager with none policy"
Apr 22 23:58:24.404139 kubelet[3117]: I0422 23:58:24.401200 3117 container_manager_linux.go:308] "Creating device plugin manager"
Apr 22 23:58:24.404139 kubelet[3117]: I0422 23:58:24.403736 3117 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 22 23:58:24.460495 kubelet[3117]: I0422 23:58:24.433410 3117 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 22 23:58:24.499114 kubelet[3117]: I0422 23:58:24.476215 3117 kubelet.go:482] "Attempting to sync node with API server"
Apr 22 23:58:24.499114 kubelet[3117]: I0422 23:58:24.486012 3117 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 22 23:58:24.499114 kubelet[3117]: I0422 23:58:24.486308 3117 kubelet.go:394] "Adding apiserver pod source"
Apr 22 23:58:24.499114 kubelet[3117]: I0422 23:58:24.486323 3117 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 22 23:58:24.803015 kubelet[3117]: I0422 23:58:24.799854 3117 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Apr 22 23:58:25.146942 kubelet[3117]: I0422 23:58:25.141034 3117 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 22 23:58:25.237991 kubelet[3117]: I0422 23:58:25.162355 3117 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 22 23:58:25.635313 kubelet[3117]: I0422 23:58:25.594210 3117 server.go:1257] "Started kubelet"
Apr 22 23:58:25.654219 kubelet[3117]: I0422 23:58:25.650960 3117 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 22 23:58:25.694856 kubelet[3117]: I0422 23:58:25.693067 3117 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 22 23:58:25.775426 kubelet[3117]: I0422 23:58:25.774519 3117 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 22 23:58:25.790430 kubelet[3117]: I0422 23:58:25.788859 3117 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 22 23:58:25.957813 kubelet[3117]: I0422 23:58:25.953967 3117 server.go:317] "Adding debug handlers to kubelet server"
Apr 22 23:58:26.260779 kubelet[3117]: I0422 23:58:26.251824 3117 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 22 23:58:26.382892 kubelet[3117]: I0422 23:58:26.301515 3117 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 22 23:58:26.452025 kubelet[3117]: E0422 23:58:26.451799 3117 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 22 23:58:26.458938 kubelet[3117]: I0422 23:58:26.440532 3117 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 22 23:58:26.503876 kubelet[3117]: I0422 23:58:26.436239 3117 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 22 23:58:26.588299 kubelet[3117]: I0422 23:58:26.558132 3117 reconciler.go:29] "Reconciler: start to sync state"
Apr 22 23:58:26.626700 kubelet[3117]: E0422 23:58:26.626125 3117 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 22 23:58:26.768095 kubelet[3117]: I0422 23:58:26.767213 3117 factory.go:223] Registration of the systemd container factory successfully
Apr 22 23:58:26.882059 kubelet[3117]: I0422 23:58:26.875009 3117 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 22 23:58:26.931011 kubelet[3117]: I0422 23:58:26.895042 3117 apiserver.go:52] "Watching apiserver"
Apr 22 23:58:27.533900 kubelet[3117]: I0422 23:58:27.528145 3117 factory.go:223] Registration of the containerd container factory successfully
Apr 22 23:58:28.093090 kubelet[3117]: E0422 23:58:28.092292 3117 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 22 23:58:28.405104 kubelet[3117]: I0422 23:58:28.400780 3117 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 22 23:58:28.535181 kubelet[3117]: I0422 23:58:28.532300 3117 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 22 23:58:28.555500 kubelet[3117]: I0422 23:58:28.536869 3117 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 22 23:58:28.555500 kubelet[3117]: I0422 23:58:28.546377 3117 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 22 23:58:28.555500 kubelet[3117]: E0422 23:58:28.547433 3117 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 22 23:58:28.682488 kubelet[3117]: E0422 23:58:28.659822 3117 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 22 23:58:28.902143 kubelet[3117]: E0422 23:58:28.899284 3117 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 22 23:58:29.462374 kubelet[3117]: E0422 23:58:29.412929 3117 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 22 23:58:30.381018 kubelet[3117]: E0422 23:58:30.377836 3117 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 22 23:58:32.010347 kubelet[3117]: E0422 23:58:32.002060 3117 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 22 23:58:35.095081 kubelet[3117]: I0422 23:58:35.091012 3117 cpu_manager.go:225] "Starting" policy="none"
Apr 22 23:58:35.095081 kubelet[3117]: I0422 23:58:35.098878 3117 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 22 23:58:35.208190 kubelet[3117]: I0422 23:58:35.107190 3117 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 22 23:58:35.220293 kubelet[3117]: E0422 23:58:35.208883 3117 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 22 23:58:35.252299 kubelet[3117]: I0422 23:58:35.248396 3117 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Apr 22 23:58:35.252299 kubelet[3117]: I0422 23:58:35.249604 3117 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Apr 22 23:58:35.252299 kubelet[3117]: I0422 23:58:35.250072 3117 policy_none.go:50] "Start"
Apr 22 23:58:35.252299 kubelet[3117]: I0422 23:58:35.250125 3117 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 22 23:58:35.252299 kubelet[3117]: I0422 23:58:35.250151 3117 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 22 23:58:35.375310 kubelet[3117]: I0422 23:58:35.346002 3117 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 22 23:58:35.375310 kubelet[3117]: I0422 23:58:35.347475 3117 policy_none.go:44] "Start"
Apr 22 23:58:36.460518 kubelet[3117]: E0422 23:58:36.407521 3117 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 22 23:58:36.530907 kubelet[3117]: I0422 23:58:36.500356 3117 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 22 23:58:36.530907 kubelet[3117]: I0422 23:58:36.500474 3117 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 22 23:58:36.624037 kubelet[3117]: I0422 23:58:36.621015 3117 plugin_manager.go:121]
"Starting Kubelet Plugin Manager" Apr 22 23:58:37.198314 kubelet[3117]: E0422 23:58:37.176277 3117 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 22 23:58:38.832962 kubelet[3117]: I0422 23:58:38.789134 3117 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 22 23:58:40.242132 kubelet[3117]: I0422 23:58:40.240191 3117 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 22 23:58:40.394000 kubelet[3117]: I0422 23:58:40.388284 3117 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 22 23:58:40.529290 kubelet[3117]: I0422 23:58:40.522726 3117 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 22 23:58:40.603987 kubelet[3117]: I0422 23:58:40.511539 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 23:58:40.667960 kubelet[3117]: I0422 23:58:40.662872 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 23:58:40.747467 kubelet[3117]: I0422 23:58:40.745543 3117 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 22 23:58:40.892279 kubelet[3117]: I0422 23:58:40.771063 3117 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 23:58:41.080717 kubelet[3117]: I0422 23:58:41.055386 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 23:58:41.080717 kubelet[3117]: I0422 23:58:41.055548 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c67841a71302de5212118cd86fd71ba-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c67841a71302de5212118cd86fd71ba\") " pod="kube-system/kube-apiserver-localhost" Apr 22 23:58:41.240711 kubelet[3117]: I0422 23:58:41.230343 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c67841a71302de5212118cd86fd71ba-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c67841a71302de5212118cd86fd71ba\") " pod="kube-system/kube-apiserver-localhost" Apr 22 23:58:41.409081 kubelet[3117]: I0422 23:58:41.406856 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c67841a71302de5212118cd86fd71ba-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0c67841a71302de5212118cd86fd71ba\") " pod="kube-system/kube-apiserver-localhost" Apr 22 23:58:41.575326 kubelet[3117]: I0422 23:58:41.570828 3117 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 23:58:41.613188 kubelet[3117]: I0422 23:58:41.612931 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 22 23:58:42.258763 kubelet[3117]: E0422 23:58:42.255124 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.59s" Apr 22 23:58:43.742137 kubelet[3117]: I0422 23:58:43.741820 3117 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Apr 22 23:58:43.750017 kubelet[3117]: I0422 23:58:43.749156 3117 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 22 23:58:45.781151 kubelet[3117]: E0422 23:58:45.778349 3117 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 22 23:58:46.083967 kubelet[3117]: E0422 23:58:46.082058 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:46.170643 kubelet[3117]: E0422 23:58:46.152190 3117 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 22 23:58:46.408250 kubelet[3117]: E0422 23:58:46.406433 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:46.839130 kubelet[3117]: E0422 23:58:46.827658 3117 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 22 23:58:46.944227 kubelet[3117]: E0422 23:58:46.849957 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:48.255313 kubelet[3117]: E0422 23:58:48.253154 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:48.549049 kubelet[3117]: E0422 23:58:48.544456 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.979s" Apr 22 23:58:49.088913 kubelet[3117]: E0422 23:58:49.083100 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:50.059753 kubelet[3117]: E0422 23:58:50.059320 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:51.946206 kubelet[3117]: E0422 23:58:51.943093 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.377s" Apr 22 23:58:52.278998 kubelet[3117]: E0422 23:58:52.257207 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:53.654045 kubelet[3117]: E0422 23:58:53.653407 3117 kubelet.go:2691] "Housekeeping took longer than expected" 
err="housekeeping took too long" expected="1s" actual="1.077s" Apr 22 23:58:54.709016 kubelet[3117]: E0422 23:58:54.705829 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:55.792508 kubelet[3117]: E0422 23:58:55.789845 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.232s" Apr 22 23:58:59.851115 kubelet[3117]: E0422 23:58:59.845522 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.286s" Apr 22 23:59:01.925974 systemd[1]: cri-containerd-c00f8a8443ee25b7ca0838b53fa56cb7875d4a41e95b9e39fc70a0422139daf8.scope: Deactivated successfully. Apr 22 23:59:01.958879 systemd[1]: cri-containerd-c00f8a8443ee25b7ca0838b53fa56cb7875d4a41e95b9e39fc70a0422139daf8.scope: Consumed 18.582s CPU time, 22.7M memory peak. Apr 22 23:59:02.248938 containerd[1644]: time="2026-04-22T23:59:02.244051229Z" level=info msg="received container exit event container_id:\"c00f8a8443ee25b7ca0838b53fa56cb7875d4a41e95b9e39fc70a0422139daf8\" id:\"c00f8a8443ee25b7ca0838b53fa56cb7875d4a41e95b9e39fc70a0422139daf8\" pid:3001 exit_status:1 exited_at:{seconds:1776902342 nanos:191132984}" Apr 22 23:59:03.556116 systemd[1]: cri-containerd-220db2584527a9f452aec23dd575abb06dea1ef705c6d1b97f7d3b5185470cc6.scope: Deactivated successfully. Apr 22 23:59:03.643809 systemd[1]: cri-containerd-220db2584527a9f452aec23dd575abb06dea1ef705c6d1b97f7d3b5185470cc6.scope: Consumed 1min 16.930s CPU time, 23.7M memory peak. 
Apr 22 23:59:04.584827 containerd[1644]: time="2026-04-22T23:59:04.504224804Z" level=info msg="received container exit event container_id:\"220db2584527a9f452aec23dd575abb06dea1ef705c6d1b97f7d3b5185470cc6\" id:\"220db2584527a9f452aec23dd575abb06dea1ef705c6d1b97f7d3b5185470cc6\" pid:2835 exit_status:1 exited_at:{seconds:1776902344 nanos:345513171}" Apr 22 23:59:04.793031 kubelet[3117]: E0422 23:59:04.790060 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.178s" Apr 22 23:59:07.186319 kubelet[3117]: E0422 23:59:07.083529 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.136s" Apr 22 23:59:07.997293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c00f8a8443ee25b7ca0838b53fa56cb7875d4a41e95b9e39fc70a0422139daf8-rootfs.mount: Deactivated successfully. Apr 22 23:59:08.245071 kubelet[3117]: E0422 23:59:08.243027 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:59:09.601215 kubelet[3117]: E0422 23:59:09.598857 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.563s" Apr 22 23:59:12.456888 kubelet[3117]: E0422 23:59:12.408298 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.793s" Apr 22 23:59:12.644361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-220db2584527a9f452aec23dd575abb06dea1ef705c6d1b97f7d3b5185470cc6-rootfs.mount: Deactivated successfully. 
Apr 22 23:59:13.703952 kubelet[3117]: E0422 23:59:13.699735 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.184s" Apr 22 23:59:13.745295 kubelet[3117]: I0422 23:59:13.724109 3117 scope.go:122] "RemoveContainer" containerID="c00f8a8443ee25b7ca0838b53fa56cb7875d4a41e95b9e39fc70a0422139daf8" Apr 22 23:59:13.775489 kubelet[3117]: I0422 23:59:13.772432 3117 scope.go:122] "RemoveContainer" containerID="e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351" Apr 22 23:59:13.873728 kubelet[3117]: E0422 23:59:13.868542 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:59:14.429679 containerd[1644]: time="2026-04-22T23:59:14.429169890Z" level=info msg="RemoveContainer for \"e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351\"" Apr 22 23:59:15.151197 containerd[1644]: time="2026-04-22T23:59:15.149146188Z" level=info msg="RemoveContainer for \"e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351\" returns successfully" Apr 22 23:59:15.780280 containerd[1644]: time="2026-04-22T23:59:15.779385138Z" level=info msg="CreateContainer within sandbox \"1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:3,}" Apr 22 23:59:16.303381 kubelet[3117]: I0422 23:59:16.302404 3117 scope.go:122] "RemoveContainer" containerID="220db2584527a9f452aec23dd575abb06dea1ef705c6d1b97f7d3b5185470cc6" Apr 22 23:59:16.389324 kubelet[3117]: E0422 23:59:16.389077 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:59:16.545737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2437352362.mount: Deactivated successfully. 
Apr 22 23:59:16.570970 containerd[1644]: time="2026-04-22T23:59:16.566610460Z" level=info msg="Container 2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19: CDI devices from CRI Config.CDIDevices: []" Apr 22 23:59:18.055282 containerd[1644]: time="2026-04-22T23:59:18.052946287Z" level=info msg="CreateContainer within sandbox \"c1b5349945d1a2228b70d5ed9e338ee60bce5f807f5a4f066d7523daa1d1b86f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 22 23:59:18.495065 kubelet[3117]: E0422 23:59:18.483115 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.802s" Apr 22 23:59:19.346874 containerd[1644]: time="2026-04-22T23:59:19.346287133Z" level=info msg="CreateContainer within sandbox \"1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:3,} returns container id \"2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19\"" Apr 22 23:59:19.801136 containerd[1644]: time="2026-04-22T23:59:19.765050578Z" level=info msg="StartContainer for \"2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19\"" Apr 22 23:59:19.801136 containerd[1644]: time="2026-04-22T23:59:19.797129225Z" level=info msg="connecting to shim 2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19" address="unix:///run/containerd/s/6ba32b72164cd91bf659ca1b461d59fa9373c7c833adb85e108c1f63f7cb4764" protocol=ttrpc version=3 Apr 22 23:59:20.148523 kubelet[3117]: E0422 23:59:20.147793 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.584s" Apr 22 23:59:20.802026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2826262618.mount: Deactivated successfully. 
Apr 22 23:59:21.155458 containerd[1644]: time="2026-04-22T23:59:21.146087438Z" level=info msg="Container 2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a: CDI devices from CRI Config.CDIDevices: []" Apr 22 23:59:22.374972 kubelet[3117]: E0422 23:59:22.373456 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.765s" Apr 22 23:59:23.612953 containerd[1644]: time="2026-04-22T23:59:23.599994101Z" level=info msg="CreateContainer within sandbox \"c1b5349945d1a2228b70d5ed9e338ee60bce5f807f5a4f066d7523daa1d1b86f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a\"" Apr 22 23:59:24.325032 containerd[1644]: time="2026-04-22T23:59:24.308482885Z" level=info msg="StartContainer for \"2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a\"" Apr 22 23:59:25.154968 containerd[1644]: time="2026-04-22T23:59:25.150515000Z" level=info msg="connecting to shim 2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a" address="unix:///run/containerd/s/3a5875e2ffea8b52b40e1376d493fb4d81e0bcbfc3fa4f4f720193f542909548" protocol=ttrpc version=3 Apr 22 23:59:26.300280 systemd[1]: Started cri-containerd-2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19.scope - libcontainer container 2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19. 
Apr 22 23:59:27.388974 kubelet[3117]: E0422 23:59:27.388492 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.82s" Apr 22 23:59:29.354357 kubelet[3117]: I0422 23:59:29.304416 3117 scope.go:122] "RemoveContainer" containerID="c00f8a8443ee25b7ca0838b53fa56cb7875d4a41e95b9e39fc70a0422139daf8" Apr 22 23:59:29.493127 systemd[1]: Started cri-containerd-2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a.scope - libcontainer container 2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a. Apr 22 23:59:32.233460 kubelet[3117]: E0422 23:59:32.063691 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.675s" Apr 22 23:59:32.676927 containerd[1644]: time="2026-04-22T23:59:32.668889036Z" level=info msg="RemoveContainer for \"c00f8a8443ee25b7ca0838b53fa56cb7875d4a41e95b9e39fc70a0422139daf8\"" Apr 22 23:59:34.839533 containerd[1644]: time="2026-04-22T23:59:34.835273156Z" level=info msg="RemoveContainer for \"c00f8a8443ee25b7ca0838b53fa56cb7875d4a41e95b9e39fc70a0422139daf8\" returns successfully" Apr 22 23:59:35.221459 kubelet[3117]: I0422 23:59:35.079452 3117 scope.go:122] "RemoveContainer" containerID="220db2584527a9f452aec23dd575abb06dea1ef705c6d1b97f7d3b5185470cc6" Apr 22 23:59:36.043135 kubelet[3117]: E0422 23:59:36.038960 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.689s" Apr 22 23:59:36.704382 containerd[1644]: time="2026-04-22T23:59:36.703907229Z" level=info msg="RemoveContainer for \"220db2584527a9f452aec23dd575abb06dea1ef705c6d1b97f7d3b5185470cc6\"" Apr 22 23:59:37.737954 containerd[1644]: time="2026-04-22T23:59:37.690111031Z" level=info msg="StartContainer for \"2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19\" returns successfully" Apr 22 23:59:38.945951 kubelet[3117]: E0422 23:59:38.938531 3117 kubelet.go:2691] 
"Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.899s" Apr 22 23:59:39.389120 containerd[1644]: time="2026-04-22T23:59:39.387333590Z" level=info msg="RemoveContainer for \"220db2584527a9f452aec23dd575abb06dea1ef705c6d1b97f7d3b5185470cc6\" returns successfully" Apr 22 23:59:41.400909 containerd[1644]: time="2026-04-22T23:59:41.397925910Z" level=info msg="StartContainer for \"2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a\" returns successfully" Apr 22 23:59:41.667295 kubelet[3117]: E0422 23:59:41.660129 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.707s" Apr 22 23:59:42.911149 kubelet[3117]: E0422 23:59:42.907174 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.235s" Apr 22 23:59:43.552355 kubelet[3117]: E0422 23:59:43.545104 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:59:43.960874 kubelet[3117]: E0422 23:59:43.935006 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.017s" Apr 22 23:59:45.298281 update_engine[1619]: I20260422 23:59:45.291524 1619 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 22 23:59:45.298281 update_engine[1619]: I20260422 23:59:45.298353 1619 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 22 23:59:45.408083 update_engine[1619]: I20260422 23:59:45.362016 1619 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 22 23:59:45.492381 update_engine[1619]: I20260422 23:59:45.486486 1619 omaha_request_params.cc:62] Current group set to beta Apr 22 23:59:45.582382 update_engine[1619]: I20260422 23:59:45.494864 1619 
update_attempter.cc:499] Already updated boot flags. Skipping. Apr 22 23:59:45.582382 update_engine[1619]: I20260422 23:59:45.502844 1619 update_attempter.cc:643] Scheduling an action processor start. Apr 22 23:59:45.582382 update_engine[1619]: I20260422 23:59:45.525017 1619 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 22 23:59:45.582382 update_engine[1619]: I20260422 23:59:45.541818 1619 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 22 23:59:45.582382 update_engine[1619]: I20260422 23:59:45.547945 1619 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 22 23:59:45.582382 update_engine[1619]: I20260422 23:59:45.548224 1619 omaha_request_action.cc:272] Request: Apr 22 23:59:45.582382 update_engine[1619]: I20260422 23:59:45.548241 1619 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 22 23:59:46.095126 locksmithd[1689]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 22 23:59:46.163505 update_engine[1619]: I20260422 23:59:45.762554 1619 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 22 23:59:46.163505 update_engine[1619]: I20260422 23:59:45.996882 1619 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 22 23:59:46.163505 update_engine[1619]: E20260422 23:59:46.009258 1619 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 22 23:59:46.163505 update_engine[1619]: I20260422 23:59:46.010151 1619 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 22 23:59:46.900206 kubelet[3117]: E0422 23:59:46.898441 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.31s" Apr 22 23:59:47.572912 kubelet[3117]: E0422 23:59:47.571146 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:59:47.906125 kubelet[3117]: E0422 23:59:47.895915 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:59:48.285267 kubelet[3117]: E0422 23:59:48.230872 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.255s" Apr 22 23:59:49.773016 kubelet[3117]: E0422 23:59:49.698262 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:59:49.773016 kubelet[3117]: E0422 23:59:49.764464 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:59:50.063879 kubelet[3117]: E0422 23:59:50.053080 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.503s" Apr 22 23:59:50.165639 kubelet[3117]: E0422 23:59:50.164193 3117 controller.go:251] "Failed to update lease" err="Put 
\"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 22 23:59:51.663954 kubelet[3117]: E0422 23:59:51.663697 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.08s" Apr 22 23:59:53.868956 kubelet[3117]: E0422 23:59:53.863637 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.233s" Apr 22 23:59:55.804387 kubelet[3117]: E0422 23:59:55.724455 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.176s" Apr 22 23:59:56.284816 update_engine[1619]: I20260422 23:59:56.284199 1619 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 22 23:59:56.284816 update_engine[1619]: I20260422 23:59:56.284792 1619 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 22 23:59:56.506739 update_engine[1619]: I20260422 23:59:56.288661 1619 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 22 23:59:56.506739 update_engine[1619]: E20260422 23:59:56.300192 1619 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 22 23:59:56.506739 update_engine[1619]: I20260422 23:59:56.301330 1619 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 22 23:59:58.266103 kubelet[3117]: E0422 23:59:58.181093 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.503s" Apr 22 23:59:59.184493 kubelet[3117]: E0422 23:59:59.183318 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:00:00.239154 kubelet[3117]: E0423 00:00:00.236016 3117 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 23 00:00:01.680065 kubelet[3117]: E0423 00:00:01.679080 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.107s" Apr 23 00:00:01.943950 kubelet[3117]: E0423 00:00:01.939772 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:00:04.083339 kubelet[3117]: E0423 00:00:04.039414 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.464s" Apr 23 00:00:06.298130 update_engine[1619]: I20260423 00:00:06.286301 1619 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 23 00:00:06.298130 update_engine[1619]: I20260423 00:00:06.297415 1619 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 23 00:00:06.479368 update_engine[1619]: I20260423 
00:00:06.311379 1619 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 23 00:00:06.479368 update_engine[1619]: E20260423 00:00:06.328426 1619 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 23 00:00:06.479368 update_engine[1619]: I20260423 00:00:06.331472 1619 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 23 00:00:07.436512 kubelet[3117]: E0423 00:00:07.435844 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.886s"
Apr 23 00:00:08.946150 kubelet[3117]: E0423 00:00:08.943830 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.507s"
Apr 23 00:00:10.176281 kubelet[3117]: E0423 00:00:10.175803 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.13s"
Apr 23 00:00:10.381306 kubelet[3117]: E0423 00:00:10.380062 3117 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 23 00:00:12.450428 kubelet[3117]: E0423 00:00:12.442846 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.889s"
Apr 23 00:00:12.855240 kubelet[3117]: E0423 00:00:12.851860 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:14.640094 kubelet[3117]: E0423 00:00:14.637977 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.988s"
Apr 23 00:00:16.086449 kubelet[3117]: E0423 00:00:16.085484 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.33s"
Apr 23 00:00:16.287909 update_engine[1619]: I20260423 00:00:16.282971 1619 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 23 00:00:16.328932 update_engine[1619]: I20260423 00:00:16.288216 1619 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 23 00:00:16.328932 update_engine[1619]: I20260423 00:00:16.302980 1619 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 23 00:00:16.363159 update_engine[1619]: E20260423 00:00:16.361097 1619 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 23 00:00:16.423461 update_engine[1619]: I20260423 00:00:16.364245 1619 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 23 00:00:16.423461 update_engine[1619]: I20260423 00:00:16.366953 1619 omaha_request_action.cc:617] Omaha request response:
Apr 23 00:00:16.423461 update_engine[1619]: E20260423 00:00:16.373684 1619 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 23 00:00:16.423461 update_engine[1619]: I20260423 00:00:16.374069 1619 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 23 00:00:16.423461 update_engine[1619]: I20260423 00:00:16.374076 1619 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 23 00:00:16.423461 update_engine[1619]: I20260423 00:00:16.374080 1619 update_attempter.cc:306] Processing Done.
Apr 23 00:00:16.423461 update_engine[1619]: E20260423 00:00:16.374095 1619 update_attempter.cc:619] Update failed.
Apr 23 00:00:16.423461 update_engine[1619]: I20260423 00:00:16.374099 1619 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 23 00:00:16.423461 update_engine[1619]: I20260423 00:00:16.374104 1619 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 23 00:00:16.423461 update_engine[1619]: I20260423 00:00:16.374117 1619 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 23 00:00:16.423461 update_engine[1619]: I20260423 00:00:16.383879 1619 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 23 00:00:16.423461 update_engine[1619]: I20260423 00:00:16.385244 1619 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 23 00:00:16.423461 update_engine[1619]: I20260423 00:00:16.386890 1619 omaha_request_action.cc:272] Request:
Apr 23 00:00:16.423461 update_engine[1619]:
Apr 23 00:00:16.423461 update_engine[1619]:
Apr 23 00:00:16.423461 update_engine[1619]:
Apr 23 00:00:16.742066 locksmithd[1689]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 23 00:00:16.742066 locksmithd[1689]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 23 00:00:16.763546 update_engine[1619]:
Apr 23 00:00:16.763546 update_engine[1619]:
Apr 23 00:00:16.763546 update_engine[1619]:
Apr 23 00:00:16.763546 update_engine[1619]: I20260423 00:00:16.390547 1619 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 23 00:00:16.763546 update_engine[1619]: I20260423 00:00:16.392280 1619 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 23 00:00:16.763546 update_engine[1619]: I20260423 00:00:16.409165 1619 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 23 00:00:16.763546 update_engine[1619]: E20260423 00:00:16.428552 1619 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 23 00:00:16.763546 update_engine[1619]: I20260423 00:00:16.435235 1619 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 23 00:00:16.763546 update_engine[1619]: I20260423 00:00:16.435320 1619 omaha_request_action.cc:617] Omaha request response:
Apr 23 00:00:16.763546 update_engine[1619]: I20260423 00:00:16.435333 1619 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 23 00:00:16.763546 update_engine[1619]: I20260423 00:00:16.435338 1619 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 23 00:00:16.763546 update_engine[1619]: I20260423 00:00:16.435344 1619 update_attempter.cc:306] Processing Done.
Apr 23 00:00:16.763546 update_engine[1619]: I20260423 00:00:16.435351 1619 update_attempter.cc:310] Error event sent.
Apr 23 00:00:16.763546 update_engine[1619]: I20260423 00:00:16.435376 1619 update_check_scheduler.cc:74] Next update check in 47m27s
Apr 23 00:00:18.569480 kubelet[3117]: E0423 00:00:18.565206 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.929s"
Apr 23 00:00:20.596505 kubelet[3117]: E0423 00:00:20.585381 3117 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 23 00:00:21.457315 kubelet[3117]: E0423 00:00:21.454796 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.719s"
Apr 23 00:00:24.293858 kubelet[3117]: E0423 00:00:24.290089 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.73s"
Apr 23 00:00:26.784808 kubelet[3117]: E0423 00:00:26.650521 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.036s"
Apr 23 00:00:27.711863 kubelet[3117]: E0423 00:00:27.711364 3117 kubelet_node_status.go:386] "Node not becoming ready in time after startup"
Apr 23 00:00:30.463039 kubelet[3117]: E0423 00:00:30.460089 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.675s"
Apr 23 00:00:30.773946 kubelet[3117]: E0423 00:00:30.754246 3117 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 23 00:00:30.773946 kubelet[3117]: I0423 00:00:30.754521 3117 controller.go:171] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Apr 23 00:00:31.403160 kubelet[3117]: E0423 00:00:31.343988 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:00:31.789284 kubelet[3117]: E0423 00:00:31.777794 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.196s"
Apr 23 00:00:34.087662 kubelet[3117]: E0423 00:00:34.083108 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.53s"
Apr 23 00:00:35.881199 kubelet[3117]: E0423 00:00:35.879529 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.289s"
Apr 23 00:00:36.659038 kubelet[3117]: E0423 00:00:36.650152 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:00:37.707312 kubelet[3117]: E0423 00:00:37.706586 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.125s"
Apr 23 00:00:39.700944 kubelet[3117]: E0423 00:00:39.668381 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.065s"
Apr 23 00:00:40.311102 kubelet[3117]: E0423 00:00:40.307164 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:41.886976 kubelet[3117]: E0423 00:00:41.873391 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:00:43.833758 kubelet[3117]: E0423 00:00:43.831282 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.279s"
Apr 23 00:00:44.490295 kubelet[3117]: E0423 00:00:44.487997 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:46.689115 kubelet[3117]: E0423 00:00:46.686495 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.125s"
Apr 23 00:00:47.213542 kubelet[3117]: E0423 00:00:47.212018 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:00:50.234549 kubelet[3117]: E0423 00:00:50.234236 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.471s"
Apr 23 00:00:52.436014 kubelet[3117]: E0423 00:00:52.410144 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:00:53.347011 kubelet[3117]: E0423 00:00:53.346606 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:57.530010 kubelet[3117]: E0423 00:00:57.529410 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:01:02.606340 kubelet[3117]: E0423 00:01:02.595990 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:01:07.758064 kubelet[3117]: E0423 00:01:07.757337 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:01:11.373338 sudo[1818]: pam_unix(sudo:session): session closed for user root
Apr 23 00:01:11.412655 sshd-session[1813]: pam_unix(sshd:session): session closed for user core
Apr 23 00:01:11.536440 sshd[1817]: Connection closed by 10.0.0.1 port 37452
Apr 23 00:01:11.580468 systemd[1]: sshd@4-10.0.0.19:22-10.0.0.1:37452.service: Deactivated successfully.
Apr 23 00:01:11.710504 systemd[1]: session-6.scope: Deactivated successfully.
Apr 23 00:01:11.724175 systemd[1]: session-6.scope: Consumed 2min 28.163s CPU time, 239.9M memory peak.
Apr 23 00:01:11.806980 systemd-logind[1614]: Session 6 logged out. Waiting for processes to exit.
Apr 23 00:01:11.896738 systemd-logind[1614]: Removed session 6.
Apr 23 00:01:12.101931 kubelet[3117]: E0423 00:01:12.080151 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:01:12.968853 kubelet[3117]: E0423 00:01:12.959180 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:01:18.184393 kubelet[3117]: E0423 00:01:18.183852 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:01:23.342094 kubelet[3117]: E0423 00:01:23.323093 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:01:23.479994 containerd[1644]: time="2026-04-23T00:01:23.407516626Z" level=info msg="container event discarded" container=c87b061482afa662324645db676bf4034621328f3be9adb89707254dc3532398 type=CONTAINER_STOPPED_EVENT
Apr 23 00:01:25.764870 containerd[1644]: time="2026-04-23T00:01:25.764335693Z" level=info msg="container event discarded" container=e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351 type=CONTAINER_CREATED_EVENT
Apr 23 00:01:26.173030 kubelet[3117]: E0423 00:01:26.169923 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.596s"
Apr 23 00:01:27.762042 kubelet[3117]: E0423 00:01:27.761154 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.202s"
Apr 23 00:01:28.651121 kubelet[3117]: E0423 00:01:28.638206 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:01:31.257123 containerd[1644]: time="2026-04-23T00:01:31.254303574Z" level=info msg="container event discarded" container=e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351 type=CONTAINER_STARTED_EVENT
Apr 23 00:01:33.841235 kubelet[3117]: E0423 00:01:33.828862 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:01:35.615915 kubelet[3117]: E0423 00:01:35.614472 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.063s"
Apr 23 00:01:39.074451 kubelet[3117]: E0423 00:01:39.072655 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:01:44.194873 kubelet[3117]: E0423 00:01:44.175044 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:01:49.293527 kubelet[3117]: E0423 00:01:49.199538 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:01:53.593963 kubelet[3117]: E0423 00:01:53.593081 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:01:54.341132 kubelet[3117]: E0423 00:01:54.340166 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:01:59.158765 kubelet[3117]: E0423 00:01:59.113191 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:01:59.432187 kubelet[3117]: E0423 00:01:59.420137 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:02:01.702977 kubelet[3117]: E0423 00:02:01.694215 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.134s"
Apr 23 00:02:04.547157 kubelet[3117]: E0423 00:02:04.513839 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:02:09.663489 kubelet[3117]: E0423 00:02:09.658666 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:02:12.023188 containerd[1644]: time="2026-04-23T00:02:12.022472278Z" level=info msg="container event discarded" container=e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351 type=CONTAINER_STOPPED_EVENT
Apr 23 00:02:13.640259 kubelet[3117]: E0423 00:02:13.635922 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.051s"
Apr 23 00:02:13.983524 containerd[1644]: time="2026-04-23T00:02:13.912284827Z" level=info msg="container event discarded" container=c87b061482afa662324645db676bf4034621328f3be9adb89707254dc3532398 type=CONTAINER_DELETED_EVENT
Apr 23 00:02:14.889453 kubelet[3117]: E0423 00:02:14.886917 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:02:19.966977 kubelet[3117]: E0423 00:02:19.956504 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:02:20.922246 containerd[1644]: time="2026-04-23T00:02:20.920989025Z" level=info msg="container event discarded" container=c00f8a8443ee25b7ca0838b53fa56cb7875d4a41e95b9e39fc70a0422139daf8 type=CONTAINER_CREATED_EVENT
Apr 23 00:02:23.586965 kubelet[3117]: E0423 00:02:23.586477 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.033s"
Apr 23 00:02:25.090440 containerd[1644]: time="2026-04-23T00:02:25.087194291Z" level=info msg="container event discarded" container=c00f8a8443ee25b7ca0838b53fa56cb7875d4a41e95b9e39fc70a0422139daf8 type=CONTAINER_STARTED_EVENT
Apr 23 00:02:25.215163 kubelet[3117]: E0423 00:02:25.208440 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:02:25.791187 kubelet[3117]: E0423 00:02:25.788281 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:02:27.756066 kubelet[3117]: E0423 00:02:27.753909 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.193s"
Apr 23 00:02:30.339794 kubelet[3117]: E0423 00:02:30.339018 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:02:35.484865 kubelet[3117]: E0423 00:02:35.484353 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:02:40.495504 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
Apr 23 00:02:40.952132 kubelet[3117]: E0423 00:02:40.950852 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:02:42.988052 systemd-tmpfiles[3309]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 23 00:02:42.988077 systemd-tmpfiles[3309]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 23 00:02:43.039724 kubelet[3117]: E0423 00:02:42.985402 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.381s"
Apr 23 00:02:43.018105 systemd-tmpfiles[3309]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 23 00:02:43.068288 systemd-tmpfiles[3309]: ACLs are not supported, ignoring.
Apr 23 00:02:43.090140 systemd-tmpfiles[3309]: ACLs are not supported, ignoring.
Apr 23 00:02:43.285159 systemd-tmpfiles[3309]: Detected autofs mount point /boot during canonicalization of boot.
Apr 23 00:02:43.285171 systemd-tmpfiles[3309]: Skipping /boot
Apr 23 00:02:43.436402 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Apr 23 00:02:43.454715 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
Apr 23 00:02:43.564522 systemd[1]: systemd-tmpfiles-clean.service: Consumed 1.248s CPU time, 4.3M memory peak.
Apr 23 00:02:46.195833 kubelet[3117]: E0423 00:02:46.195383 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:02:50.166169 kubelet[3117]: I0423 00:02:50.164756 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5725ed46-6e23-4bb8-a456-a97c11a4218c-kube-proxy\") pod \"kube-proxy-g8p8m\" (UID: \"5725ed46-6e23-4bb8-a456-a97c11a4218c\") " pod="kube-system/kube-proxy-g8p8m"
Apr 23 00:02:50.272813 kubelet[3117]: I0423 00:02:50.272006 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5725ed46-6e23-4bb8-a456-a97c11a4218c-xtables-lock\") pod \"kube-proxy-g8p8m\" (UID: \"5725ed46-6e23-4bb8-a456-a97c11a4218c\") " pod="kube-system/kube-proxy-g8p8m"
Apr 23 00:02:50.272813 kubelet[3117]: I0423 00:02:50.272361 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5725ed46-6e23-4bb8-a456-a97c11a4218c-lib-modules\") pod \"kube-proxy-g8p8m\" (UID: \"5725ed46-6e23-4bb8-a456-a97c11a4218c\") " pod="kube-system/kube-proxy-g8p8m"
Apr 23 00:02:50.272813 kubelet[3117]: I0423 00:02:50.272441 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2dlf\" (UniqueName: \"kubernetes.io/projected/5725ed46-6e23-4bb8-a456-a97c11a4218c-kube-api-access-j2dlf\") pod \"kube-proxy-g8p8m\" (UID: \"5725ed46-6e23-4bb8-a456-a97c11a4218c\") " pod="kube-system/kube-proxy-g8p8m"
Apr 23 00:02:50.386901 kubelet[3117]: I0423 00:02:50.385243 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff-run\") pod \"kube-flannel-ds-pkm95\" (UID: \"ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff\") " pod="kube-flannel/kube-flannel-ds-pkm95"
Apr 23 00:02:50.386901 kubelet[3117]: I0423 00:02:50.385397 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff-cni-plugin\") pod \"kube-flannel-ds-pkm95\" (UID: \"ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff\") " pod="kube-flannel/kube-flannel-ds-pkm95"
Apr 23 00:02:50.386901 kubelet[3117]: I0423 00:02:50.385445 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff-flannel-cfg\") pod \"kube-flannel-ds-pkm95\" (UID: \"ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff\") " pod="kube-flannel/kube-flannel-ds-pkm95"
Apr 23 00:02:50.386901 kubelet[3117]: I0423 00:02:50.385458 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzpsq\" (UniqueName: \"kubernetes.io/projected/ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff-kube-api-access-jzpsq\") pod \"kube-flannel-ds-pkm95\" (UID: \"ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff\") " pod="kube-flannel/kube-flannel-ds-pkm95"
Apr 23 00:02:50.386901 kubelet[3117]: I0423 00:02:50.385507 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff-xtables-lock\") pod \"kube-flannel-ds-pkm95\" (UID: \"ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff\") " pod="kube-flannel/kube-flannel-ds-pkm95"
Apr 23 00:02:50.403130 kubelet[3117]: I0423 00:02:50.385528 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff-cni\") pod \"kube-flannel-ds-pkm95\" (UID: \"ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff\") " pod="kube-flannel/kube-flannel-ds-pkm95"
Apr 23 00:02:50.396753 systemd[1]: Created slice kubepods-besteffort-pod5725ed46_6e23_4bb8_a456_a97c11a4218c.slice - libcontainer container kubepods-besteffort-pod5725ed46_6e23_4bb8_a456_a97c11a4218c.slice.
Apr 23 00:02:50.707887 kubelet[3117]: E0423 00:02:50.699554 3117 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-proxy-g8p8m\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" podUID="5725ed46-6e23-4bb8-a456-a97c11a4218c" pod="kube-system/kube-proxy-g8p8m"
Apr 23 00:02:50.780162 systemd[1]: Created slice kubepods-burstable-podca6186d7_59a1_4c82_9c5d_6ca6d9c6deff.slice - libcontainer container kubepods-burstable-podca6186d7_59a1_4c82_9c5d_6ca6d9c6deff.slice.
Apr 23 00:02:51.309075 kubelet[3117]: E0423 00:02:51.307495 3117 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Apr 23 00:02:51.358046 kubelet[3117]: E0423 00:02:51.345874 3117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5725ed46-6e23-4bb8-a456-a97c11a4218c-kube-proxy podName:5725ed46-6e23-4bb8-a456-a97c11a4218c nodeName:}" failed. No retries permitted until 2026-04-23 00:02:51.84218506 +0000 UTC m=+272.040017619 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/5725ed46-6e23-4bb8-a456-a97c11a4218c-kube-proxy") pod "kube-proxy-g8p8m" (UID: "5725ed46-6e23-4bb8-a456-a97c11a4218c") : failed to sync configmap cache: timed out waiting for the condition
Apr 23 00:02:51.371441 kubelet[3117]: E0423 00:02:51.359698 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:02:51.508979 kubelet[3117]: E0423 00:02:51.506294 3117 configmap.go:193] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition
Apr 23 00:02:51.588198 kubelet[3117]: E0423 00:02:51.580026 3117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff-flannel-cfg podName:ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff nodeName:}" failed. No retries permitted until 2026-04-23 00:02:52.011449685 +0000 UTC m=+272.209282238 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff-flannel-cfg") pod "kube-flannel-ds-pkm95" (UID: "ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff") : failed to sync configmap cache: timed out waiting for the condition
Apr 23 00:02:53.018742 kubelet[3117]: E0423 00:02:53.008449 3117 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Apr 23 00:02:53.057211 kubelet[3117]: E0423 00:02:53.056281 3117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5725ed46-6e23-4bb8-a456-a97c11a4218c-kube-proxy podName:5725ed46-6e23-4bb8-a456-a97c11a4218c nodeName:}" failed. No retries permitted until 2026-04-23 00:02:54.02254111 +0000 UTC m=+274.220373667 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/5725ed46-6e23-4bb8-a456-a97c11a4218c-kube-proxy") pod "kube-proxy-g8p8m" (UID: "5725ed46-6e23-4bb8-a456-a97c11a4218c") : failed to sync configmap cache: timed out waiting for the condition
Apr 23 00:02:53.800148 kubelet[3117]: E0423 00:02:53.799454 3117 projected.go:291] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 23 00:02:53.847272 kubelet[3117]: E0423 00:02:53.804997 3117 projected.go:196] Error preparing data for projected volume kube-api-access-jzpsq for pod kube-flannel/kube-flannel-ds-pkm95: failed to sync configmap cache: timed out waiting for the condition
Apr 23 00:02:53.864668 kubelet[3117]: E0423 00:02:53.864000 3117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff-kube-api-access-jzpsq podName:ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff nodeName:}" failed. No retries permitted until 2026-04-23 00:02:54.363863123 +0000 UTC m=+274.561695681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jzpsq" (UniqueName: "kubernetes.io/projected/ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff-kube-api-access-jzpsq") pod "kube-flannel-ds-pkm95" (UID: "ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff") : failed to sync configmap cache: timed out waiting for the condition
Apr 23 00:02:55.311883 kubelet[3117]: E0423 00:02:55.309353 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:02:55.529985 containerd[1644]: time="2026-04-23T00:02:55.528089117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g8p8m,Uid:5725ed46-6e23-4bb8-a456-a97c11a4218c,Namespace:kube-system,Attempt:0,}"
Apr 23 00:02:55.610771 kubelet[3117]: I0423 00:02:55.587435 3117 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 23 00:02:55.680553 kubelet[3117]: E0423 00:02:55.658283 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:02:55.702862 kubelet[3117]: E0423 00:02:55.702392 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.125s"
Apr 23 00:02:55.766209 containerd[1644]: time="2026-04-23T00:02:55.765650415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-pkm95,Uid:ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff,Namespace:kube-flannel,Attempt:0,}"
Apr 23 00:02:55.886030 containerd[1644]: time="2026-04-23T00:02:55.874117996Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 23 00:02:55.892528 kubelet[3117]: I0423 00:02:55.892191 3117 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 23 00:02:56.954080 kubelet[3117]: E0423 00:02:56.951230 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:02:57.381684 containerd[1644]: time="2026-04-23T00:02:57.381006695Z" level=info msg="connecting to shim bf5cddb6b1e3b415d184f81797269cbfea1fd7cb7dc4adefee0993cdf6bc56c4" address="unix:///run/containerd/s/adf273cdccb3d42617fd5a3b6af093b4b306add900c8ea5d70e1e18a2127759e" namespace=k8s.io protocol=ttrpc version=3
Apr 23 00:02:57.437298 containerd[1644]: time="2026-04-23T00:02:57.396074238Z" level=info msg="connecting to shim 2ccfcca9a5f7ed526012346ba67cee1a88ef2da98db5f6130e5433f17d6ff805" address="unix:///run/containerd/s/0d8b57edcd286969aa395915d1e096cb8a3a2f95eb77a2e7c4704af9a390361d" namespace=k8s.io protocol=ttrpc version=3
Apr 23 00:02:58.433238 kubelet[3117]: E0423 00:02:58.413458 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.859s"
Apr 23 00:02:59.666159 kubelet[3117]: E0423 00:02:59.664094 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.056s"
Apr 23 00:02:59.793130 systemd[1]: Started cri-containerd-2ccfcca9a5f7ed526012346ba67cee1a88ef2da98db5f6130e5433f17d6ff805.scope - libcontainer container 2ccfcca9a5f7ed526012346ba67cee1a88ef2da98db5f6130e5433f17d6ff805.
Apr 23 00:03:00.664757 systemd[1]: Started cri-containerd-bf5cddb6b1e3b415d184f81797269cbfea1fd7cb7dc4adefee0993cdf6bc56c4.scope - libcontainer container bf5cddb6b1e3b415d184f81797269cbfea1fd7cb7dc4adefee0993cdf6bc56c4.
Apr 23 00:03:00.755073 kubelet[3117]: E0423 00:03:00.751902 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:03:01.601739 systemd[1]: cri-containerd-2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19.scope: Deactivated successfully.
Apr 23 00:03:01.603241 systemd[1]: cri-containerd-2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19.scope: Consumed 59.558s CPU time, 45.9M memory peak.
Apr 23 00:03:01.779091 containerd[1644]: time="2026-04-23T00:03:01.775881700Z" level=info msg="received container exit event container_id:\"2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19\" id:\"2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19\" pid:3222 exit_status:1 exited_at:{seconds:1776902581 nanos:750765511}"
Apr 23 00:03:02.069296 kubelet[3117]: E0423 00:03:02.050085 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:03:02.767894 containerd[1644]: time="2026-04-23T00:03:02.765482655Z" level=error msg="get state for 2ccfcca9a5f7ed526012346ba67cee1a88ef2da98db5f6130e5433f17d6ff805" error="context deadline exceeded"
Apr 23 00:03:02.767894 containerd[1644]: time="2026-04-23T00:03:02.766927772Z" level=warning msg="unknown status" status=0
Apr 23 00:03:02.895841 containerd[1644]: time="2026-04-23T00:03:02.895265544Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 23 00:03:05.281061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19-rootfs.mount: Deactivated successfully.
Apr 23 00:03:05.613072 containerd[1644]: time="2026-04-23T00:03:05.612264939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-pkm95,Uid:ca6186d7-59a1-4c82-9c5d-6ca6d9c6deff,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"2ccfcca9a5f7ed526012346ba67cee1a88ef2da98db5f6130e5433f17d6ff805\""
Apr 23 00:03:05.746687 kubelet[3117]: E0423 00:03:05.746255 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:03:06.106693 kubelet[3117]: E0423 00:03:06.094290 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.513s"
Apr 23 00:03:06.260828 containerd[1644]: time="2026-04-23T00:03:06.260405725Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\""
Apr 23 00:03:07.101870 kubelet[3117]: E0423 00:03:07.098399 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:03:07.643766 kubelet[3117]: I0423 00:03:07.643411 3117 scope.go:122] "RemoveContainer" containerID="2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19"
Apr 23 00:03:07.667833 kubelet[3117]: E0423 00:03:07.656510 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:03:08.311206 kubelet[3117]: E0423 00:03:08.310815 3117 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 23 00:03:08.443176 containerd[1644]: time="2026-04-23T00:03:08.442409681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g8p8m,Uid:5725ed46-6e23-4bb8-a456-a97c11a4218c,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf5cddb6b1e3b415d184f81797269cbfea1fd7cb7dc4adefee0993cdf6bc56c4\""
Apr 23 00:03:08.539107 kubelet[3117]: E0423 00:03:08.537808 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:03:09.575852 containerd[1644]: time="2026-04-23T00:03:09.573024776Z" level=info msg="CreateContainer within sandbox \"bf5cddb6b1e3b415d184f81797269cbfea1fd7cb7dc4adefee0993cdf6bc56c4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 23 00:03:09.653262 kubelet[3117]: E0423 00:03:09.607931 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.049s"
Apr 23 00:03:10.202238 containerd[1644]: time="2026-04-23T00:03:10.199199441Z" level=info msg="Container 1917051bed70f4cb581529da4cc0abb060f039c780f1bbb36e2f4bab0ec01cd3: CDI devices from CRI Config.CDIDevices: []"
Apr 23 00:03:10.376245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3836023733.mount: Deactivated successfully.
Apr 23 00:03:10.473238 containerd[1644]: time="2026-04-23T00:03:10.466788968Z" level=info msg="CreateContainer within sandbox \"bf5cddb6b1e3b415d184f81797269cbfea1fd7cb7dc4adefee0993cdf6bc56c4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1917051bed70f4cb581529da4cc0abb060f039c780f1bbb36e2f4bab0ec01cd3\""
Apr 23 00:03:10.586079 containerd[1644]: time="2026-04-23T00:03:10.585196975Z" level=info msg="StartContainer for \"1917051bed70f4cb581529da4cc0abb060f039c780f1bbb36e2f4bab0ec01cd3\""
Apr 23 00:03:10.650782 containerd[1644]: time="2026-04-23T00:03:10.650315674Z" level=info msg="connecting to shim 1917051bed70f4cb581529da4cc0abb060f039c780f1bbb36e2f4bab0ec01cd3" address="unix:///run/containerd/s/adf273cdccb3d42617fd5a3b6af093b4b306add900c8ea5d70e1e18a2127759e" protocol=ttrpc version=3
Apr 23 00:03:11.877517 systemd[1]: Started cri-containerd-1917051bed70f4cb581529da4cc0abb060f039c780f1bbb36e2f4bab0ec01cd3.scope - libcontainer container 1917051bed70f4cb581529da4cc0abb060f039c780f1bbb36e2f4bab0ec01cd3.
Apr 23 00:03:12.194944 kubelet[3117]: E0423 00:03:12.190041 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:03:13.585171 kubelet[3117]: I0423 00:03:13.583180 3117 scope.go:122] "RemoveContainer" containerID="2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19"
Apr 23 00:03:13.616472 kubelet[3117]: E0423 00:03:13.616056 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:03:14.134805 containerd[1644]: time="2026-04-23T00:03:14.128888835Z" level=info msg="CreateContainer within sandbox \"1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:4,}"
Apr 23 00:03:14.573219 containerd[1644]: time="2026-04-23T00:03:14.558954050Z" level=info msg="StartContainer for \"1917051bed70f4cb581529da4cc0abb060f039c780f1bbb36e2f4bab0ec01cd3\" returns successfully"
Apr 23 00:03:14.742284 containerd[1644]: time="2026-04-23T00:03:14.742160251Z" level=info msg="Container 304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f: CDI devices from CRI Config.CDIDevices: []"
Apr 23 00:03:15.483863 containerd[1644]: time="2026-04-23T00:03:15.483482739Z" level=info msg="CreateContainer within sandbox \"1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:4,} returns container id \"304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f\""
Apr 23 00:03:15.850947 containerd[1644]: time="2026-04-23T00:03:15.782137683Z" level=info msg="StartContainer for \"304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f\""
Apr 23 00:03:16.049846 containerd[1644]: time="2026-04-23T00:03:16.049084340Z" level=info msg="connecting to shim 304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f" address="unix:///run/containerd/s/6ba32b72164cd91bf659ca1b461d59fa9373c7c833adb85e108c1f63f7cb4764" protocol=ttrpc version=3
Apr 23 00:03:16.741692 kubelet[3117]: E0423 00:03:16.741048 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.185s"
Apr 23 00:03:16.744785 kubelet[3117]: E0423 00:03:16.742157 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:03:17.600109 kubelet[3117]: E0423 00:03:17.597959 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:03:18.276839 kubelet[3117]: E0423 00:03:18.263305 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.46s"
Apr 23 00:03:18.663161 systemd[1]: Started cri-containerd-304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f.scope - libcontainer container 304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f.
Apr 23 00:03:20.665541 kubelet[3117]: E0423 00:03:20.664275 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.788s"
Apr 23 00:03:21.294423 kubelet[3117]: E0423 00:03:21.293810 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:03:22.817917 kubelet[3117]: E0423 00:03:22.816313 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:03:22.856088 containerd[1644]: time="2026-04-23T00:03:22.855298732Z" level=info msg="StartContainer for \"304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f\" returns successfully"
Apr 23 00:03:23.360837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1046163767.mount: Deactivated successfully.
Apr 23 00:03:24.306832 kubelet[3117]: E0423 00:03:24.297548 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:03:24.772216 kubelet[3117]: E0423 00:03:24.762083 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:03:25.448026 kubelet[3117]: E0423 00:03:25.447649 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:03:25.938835 containerd[1644]: time="2026-04-23T00:03:25.936750226Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 00:03:25.957057 containerd[1644]: time="2026-04-23T00:03:25.942165538Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=3642283"
Apr 23 00:03:26.127133 containerd[1644]: time="2026-04-23T00:03:26.126555326Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 00:03:26.339554 containerd[1644]: time="2026-04-23T00:03:26.329994462Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 00:03:26.499069 containerd[1644]: time="2026-04-23T00:03:26.498123861Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 20.237534768s"
Apr 23 00:03:26.499069 containerd[1644]: time="2026-04-23T00:03:26.498225461Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\""
Apr 23 00:03:26.782036 kubelet[3117]: E0423 00:03:26.779219 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:03:27.340084 containerd[1644]: time="2026-04-23T00:03:27.312115427Z" level=info msg="CreateContainer within sandbox \"2ccfcca9a5f7ed526012346ba67cee1a88ef2da98db5f6130e5433f17d6ff805\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Apr 23 00:03:28.235935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1600952952.mount: Deactivated successfully.
Apr 23 00:03:28.294170 kubelet[3117]: I0423 00:03:28.190413 3117 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-g8p8m" podStartSLOduration=39.19021223 podStartE2EDuration="39.19021223s" podCreationTimestamp="2026-04-23 00:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 00:03:18.88622166 +0000 UTC m=+299.084054328" watchObservedRunningTime="2026-04-23 00:03:28.19021223 +0000 UTC m=+308.388044782"
Apr 23 00:03:28.454837 kubelet[3117]: E0423 00:03:28.454482 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:03:28.484067 containerd[1644]: time="2026-04-23T00:03:28.472508604Z" level=info msg="Container 7cd94f43cd93946013f83343257972a8cfef99952ecfc270e74742ca099bc638: CDI devices from CRI Config.CDIDevices: []"
Apr 23 00:03:29.145977 containerd[1644]: time="2026-04-23T00:03:29.145741960Z" level=info msg="CreateContainer within sandbox \"2ccfcca9a5f7ed526012346ba67cee1a88ef2da98db5f6130e5433f17d6ff805\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"7cd94f43cd93946013f83343257972a8cfef99952ecfc270e74742ca099bc638\""
Apr 23 00:03:29.277219 containerd[1644]: time="2026-04-23T00:03:29.276306834Z" level=info msg="StartContainer for \"7cd94f43cd93946013f83343257972a8cfef99952ecfc270e74742ca099bc638\""
Apr 23 00:03:29.381265 containerd[1644]: time="2026-04-23T00:03:29.379731782Z" level=info msg="connecting to shim 7cd94f43cd93946013f83343257972a8cfef99952ecfc270e74742ca099bc638" address="unix:///run/containerd/s/0d8b57edcd286969aa395915d1e096cb8a3a2f95eb77a2e7c4704af9a390361d" protocol=ttrpc version=3
Apr 23 00:03:30.402491 kubelet[3117]: E0423 00:03:30.398123 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.754s"
Apr 23 00:03:30.402511 systemd[1]: Started cri-containerd-7cd94f43cd93946013f83343257972a8cfef99952ecfc270e74742ca099bc638.scope - libcontainer container 7cd94f43cd93946013f83343257972a8cfef99952ecfc270e74742ca099bc638.
Apr 23 00:03:31.821782 systemd[1]: cri-containerd-7cd94f43cd93946013f83343257972a8cfef99952ecfc270e74742ca099bc638.scope: Deactivated successfully.
Apr 23 00:03:31.992133 containerd[1644]: time="2026-04-23T00:03:31.986169626Z" level=info msg="received container exit event container_id:\"7cd94f43cd93946013f83343257972a8cfef99952ecfc270e74742ca099bc638\" id:\"7cd94f43cd93946013f83343257972a8cfef99952ecfc270e74742ca099bc638\" pid:3511 exited_at:{seconds:1776902611 nanos:898447630}"
Apr 23 00:03:32.082911 containerd[1644]: time="2026-04-23T00:03:32.078258830Z" level=info msg="StartContainer for \"7cd94f43cd93946013f83343257972a8cfef99952ecfc270e74742ca099bc638\" returns successfully"
Apr 23 00:03:33.150270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cd94f43cd93946013f83343257972a8cfef99952ecfc270e74742ca099bc638-rootfs.mount: Deactivated successfully.
Apr 23 00:03:33.631843 kubelet[3117]: E0423 00:03:33.630299 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:03:34.764896 kubelet[3117]: E0423 00:03:34.763976 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:03:35.465757 containerd[1644]: time="2026-04-23T00:03:35.463022929Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\""
Apr 23 00:03:35.742740 kubelet[3117]: E0423 00:03:35.733319 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.176s"
Apr 23 00:03:39.314825 kubelet[3117]: E0423 00:03:39.314143 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:03:39.887202 kubelet[3117]: E0423 00:03:39.871512 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.234s"
Apr 23 00:03:41.322108 kubelet[3117]: E0423 00:03:41.320861 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.437s"
Apr 23 00:03:42.198685 kubelet[3117]: E0423 00:03:42.197159 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:03:44.793176 kubelet[3117]: E0423 00:03:44.792257 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.137s"
Apr 23 00:03:44.999649 kubelet[3117]: E0423 00:03:44.891108 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:03:46.211229 kubelet[3117]: E0423 00:03:46.183507 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.325s"
Apr 23 00:03:49.292305 kubelet[3117]: E0423 00:03:49.290168 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.567s"
Apr 23 00:03:50.304948 kubelet[3117]: E0423 00:03:50.268084 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:03:50.689740 kubelet[3117]: E0423 00:03:50.687896 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.32s"
Apr 23 00:03:54.725948 kubelet[3117]: E0423 00:03:54.725325 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.009s"
Apr 23 00:03:55.534420 kubelet[3117]: E0423 00:03:55.532958 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:03:59.668979 kubelet[3117]: E0423 00:03:59.668473 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.107s"
Apr 23 00:04:00.776069 kubelet[3117]: E0423 00:04:00.754176 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:04:02.677325 kubelet[3117]: E0423 00:04:02.677022 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.126s"
Apr 23 00:04:06.184241 kubelet[3117]: E0423 00:04:06.165256 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:04:06.729999 kubelet[3117]: E0423 00:04:06.728955 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:04:09.010157 containerd[1644]: time="2026-04-23T00:04:09.004260933Z" level=info msg="container event discarded" container=c00f8a8443ee25b7ca0838b53fa56cb7875d4a41e95b9e39fc70a0422139daf8 type=CONTAINER_STOPPED_EVENT
Apr 23 00:04:11.258495 kubelet[3117]: E0423 00:04:11.258046 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:04:13.264778 containerd[1644]: time="2026-04-23T00:04:13.264228976Z" level=info msg="container event discarded" container=220db2584527a9f452aec23dd575abb06dea1ef705c6d1b97f7d3b5185470cc6 type=CONTAINER_STOPPED_EVENT
Apr 23 00:04:15.159818 containerd[1644]: time="2026-04-23T00:04:15.158982328Z" level=info msg="container event discarded" container=e3f69d3000459fd4fbf7628244f9ec54de7da29a418a598f57a28842afeb5351 type=CONTAINER_DELETED_EVENT
Apr 23 00:04:16.614041 kubelet[3117]: E0423 00:04:16.609927 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:04:17.776224 containerd[1644]: time="2026-04-23T00:04:17.775233960Z" level=info msg="container event discarded" container=2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19 type=CONTAINER_CREATED_EVENT
Apr 23 00:04:18.105256 kubelet[3117]: E0423 00:04:18.086911 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.52s"
Apr 23 00:04:21.780265 kubelet[3117]: E0423 00:04:21.779038 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:04:23.180351 containerd[1644]: time="2026-04-23T00:04:23.179072075Z" level=info msg="container event discarded" container=2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a type=CONTAINER_CREATED_EVENT
Apr 23 00:04:23.874220 kubelet[3117]: E0423 00:04:23.873748 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.318s"
Apr 23 00:04:27.201975 kubelet[3117]: E0423 00:04:26.986357 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:04:30.389060 kubelet[3117]: E0423 00:04:30.388235 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.79s"
Apr 23 00:04:31.610285 kubelet[3117]: E0423 00:04:31.609014 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.06s"
Apr 23 00:04:32.509060 kubelet[3117]: E0423 00:04:32.491162 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:04:34.847959 containerd[1644]: time="2026-04-23T00:04:34.847113971Z" level=info msg="container event discarded" container=c00f8a8443ee25b7ca0838b53fa56cb7875d4a41e95b9e39fc70a0422139daf8 type=CONTAINER_DELETED_EVENT
Apr 23 00:04:35.206357 kubelet[3117]: E0423 00:04:35.202920 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:04:36.266006 containerd[1644]: time="2026-04-23T00:04:36.259388529Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 00:04:36.300063 containerd[1644]: time="2026-04-23T00:04:36.297407541Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29344049"
Apr 23 00:04:36.650264 containerd[1644]: time="2026-04-23T00:04:36.590161821Z" level=info msg="container event discarded" container=2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19 type=CONTAINER_STARTED_EVENT
Apr 23 00:04:36.770361 containerd[1644]: time="2026-04-23T00:04:36.768299870Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 00:04:37.265343 containerd[1644]: time="2026-04-23T00:04:37.264054584Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 00:04:37.570396 containerd[1644]: time="2026-04-23T00:04:37.563914463Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 1m2.100757392s"
Apr 23 00:04:37.570396 containerd[1644]: time="2026-04-23T00:04:37.563984744Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\""
Apr 23 00:04:37.864221 kubelet[3117]: E0423 00:04:37.846025 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:04:38.283874 kubelet[3117]: E0423 00:04:38.268876 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.687s"
Apr 23 00:04:38.659404 containerd[1644]: time="2026-04-23T00:04:38.657711215Z" level=info msg="CreateContainer within sandbox \"2ccfcca9a5f7ed526012346ba67cee1a88ef2da98db5f6130e5433f17d6ff805\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 23 00:04:39.404245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount953093965.mount: Deactivated successfully.
Apr 23 00:04:39.500233 containerd[1644]: time="2026-04-23T00:04:39.499028223Z" level=info msg="container event discarded" container=220db2584527a9f452aec23dd575abb06dea1ef705c6d1b97f7d3b5185470cc6 type=CONTAINER_DELETED_EVENT
Apr 23 00:04:39.594203 containerd[1644]: time="2026-04-23T00:04:39.591243489Z" level=info msg="Container f9b437b56b6419f11b44d26361d334a3579aaf245e3b71829abad3eb6f25ae0b: CDI devices from CRI Config.CDIDevices: []"
Apr 23 00:04:39.949455 containerd[1644]: time="2026-04-23T00:04:39.948973770Z" level=info msg="CreateContainer within sandbox \"2ccfcca9a5f7ed526012346ba67cee1a88ef2da98db5f6130e5433f17d6ff805\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f9b437b56b6419f11b44d26361d334a3579aaf245e3b71829abad3eb6f25ae0b\""
Apr 23 00:04:40.041556 kubelet[3117]: E0423 00:04:40.041102 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.77s"
Apr 23 00:04:40.060322 containerd[1644]: time="2026-04-23T00:04:40.058775587Z" level=info msg="StartContainer for \"f9b437b56b6419f11b44d26361d334a3579aaf245e3b71829abad3eb6f25ae0b\""
Apr 23 00:04:40.164817 containerd[1644]: time="2026-04-23T00:04:40.162938654Z" level=info msg="connecting to shim f9b437b56b6419f11b44d26361d334a3579aaf245e3b71829abad3eb6f25ae0b" address="unix:///run/containerd/s/0d8b57edcd286969aa395915d1e096cb8a3a2f95eb77a2e7c4704af9a390361d" protocol=ttrpc version=3
Apr 23 00:04:40.440129 systemd[1]: cri-containerd-304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f.scope: Deactivated successfully.
Apr 23 00:04:40.462266 systemd[1]: cri-containerd-304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f.scope: Consumed 24.028s CPU time, 36.4M memory peak.
Apr 23 00:04:40.890142 containerd[1644]: time="2026-04-23T00:04:40.886124992Z" level=info msg="received container exit event container_id:\"304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f\" id:\"304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f\" pid:3470 exit_status:1 exited_at:{seconds:1776902680 nanos:845340187}"
Apr 23 00:04:41.194531 containerd[1644]: time="2026-04-23T00:04:41.182038778Z" level=info msg="container event discarded" container=2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a type=CONTAINER_STARTED_EVENT
Apr 23 00:04:41.295932 systemd[1]: cri-containerd-2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a.scope: Deactivated successfully.
Apr 23 00:04:41.404321 systemd[1]: cri-containerd-2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a.scope: Consumed 1min 4.509s CPU time, 24.1M memory peak.
Apr 23 00:04:41.452883 kubelet[3117]: E0423 00:04:41.443388 3117 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 23 00:04:41.595658 containerd[1644]: time="2026-04-23T00:04:41.594237219Z" level=info msg="received container exit event container_id:\"2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a\" id:\"2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a\" pid:3236 exit_status:1 exited_at:{seconds:1776902681 nanos:443321626}"
Apr 23 00:04:41.695210 kubelet[3117]: E0423 00:04:41.694078 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.1s"
Apr 23 00:04:43.147243 kubelet[3117]: E0423 00:04:43.001550 3117 controller.go:251] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 23 00:04:43.199983 systemd[1]: Started cri-containerd-f9b437b56b6419f11b44d26361d334a3579aaf245e3b71829abad3eb6f25ae0b.scope - libcontainer container f9b437b56b6419f11b44d26361d334a3579aaf245e3b71829abad3eb6f25ae0b.
Apr 23 00:04:43.552915 kubelet[3117]: E0423 00:04:43.548192 3117 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:04:44.469914 kubelet[3117]: E0423 00:04:44.401291 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.809s"
Apr 23 00:04:45.414179 kubelet[3117]: E0423 00:04:45.291281 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:04:46.298709 systemd[1]: cri-containerd-f9b437b56b6419f11b44d26361d334a3579aaf245e3b71829abad3eb6f25ae0b.scope: Deactivated successfully.
Apr 23 00:04:46.491283 containerd[1644]: time="2026-04-23T00:04:46.477849564Z" level=info msg="received container exit event container_id:\"f9b437b56b6419f11b44d26361d334a3579aaf245e3b71829abad3eb6f25ae0b\" id:\"f9b437b56b6419f11b44d26361d334a3579aaf245e3b71829abad3eb6f25ae0b\" pid:3748 exited_at:{seconds:1776902686 nanos:287875782}"
Apr 23 00:04:46.576106 kubelet[3117]: E0423 00:04:46.562905 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.7s"
Apr 23 00:04:46.764295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f-rootfs.mount: Deactivated successfully.
Apr 23 00:04:46.910252 containerd[1644]: time="2026-04-23T00:04:46.906153304Z" level=info msg="StartContainer for \"f9b437b56b6419f11b44d26361d334a3579aaf245e3b71829abad3eb6f25ae0b\" returns successfully"
Apr 23 00:04:47.799690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a-rootfs.mount: Deactivated successfully.
Apr 23 00:04:48.285538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9b437b56b6419f11b44d26361d334a3579aaf245e3b71829abad3eb6f25ae0b-rootfs.mount: Deactivated successfully.
Apr 23 00:04:48.978686 kubelet[3117]: I0423 00:04:48.977872 3117 scope.go:122] "RemoveContainer" containerID="2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a"
Apr 23 00:04:49.001118 kubelet[3117]: E0423 00:04:49.000346 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:04:49.100085 kubelet[3117]: E0423 00:04:49.099210 3117 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 23 00:04:49.565030 kubelet[3117]: I0423 00:04:49.559178 3117 scope.go:122] "RemoveContainer" containerID="2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19"
Apr 23 00:04:49.612737 kubelet[3117]: E0423 00:04:49.611498 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:04:49.884919 kubelet[3117]: I0423 00:04:49.879079 3117 scope.go:122] "RemoveContainer" containerID="304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f"
Apr 23 00:04:50.005759 kubelet[3117]: E0423 00:04:49.916746 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:04:50.167229 kubelet[3117]: E0423 00:04:50.104366 3117 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 23 00:04:50.614252 containerd[1644]: time="2026-04-23T00:04:50.584389955Z" level=info msg="RemoveContainer for \"2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19\""
Apr 23 00:04:50.994039 kubelet[3117]: E0423 00:04:50.900125 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:04:51.056046 containerd[1644]: time="2026-04-23T00:04:51.054542548Z" level=info msg="RemoveContainer for \"2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19\" returns successfully"
Apr 23 00:04:52.174071 containerd[1644]: time="2026-04-23T00:04:52.172704540Z" level=info msg="CreateContainer within sandbox \"2ccfcca9a5f7ed526012346ba67cee1a88ef2da98db5f6130e5433f17d6ff805\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Apr 23 00:04:52.758500 containerd[1644]: time="2026-04-23T00:04:52.757387559Z" level=info msg="Container 95eb83c5c9aa25f92d3e6d21406df3b0726533deddf52164cb7fdcbda4c87d1b: CDI devices from CRI Config.CDIDevices: []"
Apr 23 00:04:53.123007 containerd[1644]: time="2026-04-23T00:04:53.119311431Z" level=info msg="CreateContainer within sandbox \"2ccfcca9a5f7ed526012346ba67cee1a88ef2da98db5f6130e5433f17d6ff805\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"95eb83c5c9aa25f92d3e6d21406df3b0726533deddf52164cb7fdcbda4c87d1b\""
Apr 23 00:04:53.260340 containerd[1644]: time="2026-04-23T00:04:53.258399685Z" level=info msg="StartContainer for \"95eb83c5c9aa25f92d3e6d21406df3b0726533deddf52164cb7fdcbda4c87d1b\""
Apr 23 00:04:53.308402 kubelet[3117]: I0423 00:04:53.305151 3117 scope.go:122] "RemoveContainer" containerID="2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a"
Apr 23 00:04:53.432177 kubelet[3117]: E0423 00:04:53.389159 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:04:53.446075 containerd[1644]: time="2026-04-23T00:04:53.406945790Z" level=info msg="connecting to shim 95eb83c5c9aa25f92d3e6d21406df3b0726533deddf52164cb7fdcbda4c87d1b" address="unix:///run/containerd/s/0d8b57edcd286969aa395915d1e096cb8a3a2f95eb77a2e7c4704af9a390361d" protocol=ttrpc version=3
Apr 23 00:04:54.089263 kubelet[3117]: E0423 00:04:54.089028 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.534s"
Apr 23 00:04:54.189314 containerd[1644]: time="2026-04-23T00:04:54.188485338Z" level=info msg="CreateContainer within sandbox \"c1b5349945d1a2228b70d5ed9e338ee60bce5f807f5a4f066d7523daa1d1b86f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}"
Apr 23 00:04:55.486952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1178339935.mount: Deactivated successfully.
Apr 23 00:04:55.973895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2954046392.mount: Deactivated successfully.
Apr 23 00:04:56.021488 containerd[1644]: time="2026-04-23T00:04:56.021041187Z" level=info msg="Container 6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff: CDI devices from CRI Config.CDIDevices: []"
Apr 23 00:04:56.248790 kubelet[3117]: E0423 00:04:56.237483 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.679s"
Apr 23 00:04:56.295128 kubelet[3117]: I0423 00:04:56.292781 3117 scope.go:122] "RemoveContainer" containerID="304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f"
Apr 23 00:04:56.364720 kubelet[3117]: E0423 00:04:56.364293 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:04:56.763072 kubelet[3117]: E0423 00:04:56.760913 3117 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 23 00:04:57.501887 systemd[1]: Started cri-containerd-95eb83c5c9aa25f92d3e6d21406df3b0726533deddf52164cb7fdcbda4c87d1b.scope - libcontainer container 95eb83c5c9aa25f92d3e6d21406df3b0726533deddf52164cb7fdcbda4c87d1b.
Apr 23 00:04:57.783343 kubelet[3117]: E0423 00:04:57.768531 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.219s"
Apr 23 00:04:58.359126 containerd[1644]: time="2026-04-23T00:04:58.354105707Z" level=info msg="CreateContainer within sandbox \"c1b5349945d1a2228b70d5ed9e338ee60bce5f807f5a4f066d7523daa1d1b86f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff\""
Apr 23 00:04:58.487975 containerd[1644]: time="2026-04-23T00:04:58.487797593Z" level=info msg="StartContainer for \"6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff\""
Apr 23 00:04:58.809269 containerd[1644]: time="2026-04-23T00:04:58.763652775Z" level=info msg="connecting to shim 6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff" address="unix:///run/containerd/s/3a5875e2ffea8b52b40e1376d493fb4d81e0bcbfc3fa4f4f720193f542909548" protocol=ttrpc version=3
Apr 23 00:04:59.814145 kubelet[3117]: E0423 00:04:59.777420 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.229s"
Apr 23 00:05:00.575243 containerd[1644]: time="2026-04-23T00:05:00.553540694Z" level=info msg="StartContainer for \"95eb83c5c9aa25f92d3e6d21406df3b0726533deddf52164cb7fdcbda4c87d1b\" returns successfully"
Apr 23 00:05:01.167087 systemd[1]: Started cri-containerd-6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff.scope - libcontainer container 6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff.
Apr 23 00:05:01.720431 kubelet[3117]: E0423 00:05:01.714054 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.167s"
Apr 23 00:05:01.964348 kubelet[3117]: I0423 00:05:01.962768 3117 scope.go:122] "RemoveContainer" containerID="304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f"
Apr 23 00:05:02.083255 kubelet[3117]: E0423 00:05:02.062966 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:05:03.125021 containerd[1644]: time="2026-04-23T00:05:03.124346943Z" level=info msg="CreateContainer within sandbox \"1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:5,}"
Apr 23 00:05:03.431275 kubelet[3117]: E0423 00:05:03.406227 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:05:03.796914 containerd[1644]: time="2026-04-23T00:05:03.793279825Z" level=error msg="get state for 6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff" error="context deadline exceeded"
Apr 23 00:05:03.796914 containerd[1644]: time="2026-04-23T00:05:03.793433003Z" level=warning msg="unknown status" status=0
Apr 23 00:05:03.845357 containerd[1644]: time="2026-04-23T00:05:03.841354610Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 23 00:05:04.248098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2381772032.mount: Deactivated successfully.
Apr 23 00:05:04.291263 containerd[1644]: time="2026-04-23T00:05:04.266365847Z" level=info msg="Container 593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b: CDI devices from CRI Config.CDIDevices: []"
Apr 23 00:05:05.751378 containerd[1644]: time="2026-04-23T00:05:05.750179631Z" level=info msg="CreateContainer within sandbox \"1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:5,} returns container id \"593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b\""
Apr 23 00:05:05.848249 containerd[1644]: time="2026-04-23T00:05:05.847265381Z" level=info msg="StartContainer for \"6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff\" returns successfully"
Apr 23 00:05:05.959929 containerd[1644]: time="2026-04-23T00:05:05.957290762Z" level=info msg="StartContainer for \"593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b\""
Apr 23 00:05:06.194400 containerd[1644]: time="2026-04-23T00:05:06.194078330Z" level=info msg="connecting to shim 593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b" address="unix:///run/containerd/s/6ba32b72164cd91bf659ca1b461d59fa9373c7c833adb85e108c1f63f7cb4764" protocol=ttrpc version=3
Apr 23 00:05:06.364301 kubelet[3117]: E0423 00:05:06.363330 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.806s"
Apr 23 00:05:08.084953 kubelet[3117]: E0423 00:05:08.057353 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.468s"
Apr 23 00:05:08.362507 systemd[1]: Started cri-containerd-593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b.scope - libcontainer container 593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b.
Apr 23 00:05:08.566110 kubelet[3117]: E0423 00:05:08.565191 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:05:11.357310 kubelet[3117]: E0423 00:05:11.356946 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.788s"
Apr 23 00:05:11.387349 kubelet[3117]: E0423 00:05:11.387017 3117 cadvisor_stats_provider.go:569] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice/cri-containerd-593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b.scope\": RecentStats: unable to find data in memory cache]"
Apr 23 00:05:12.583539 containerd[1644]: time="2026-04-23T00:05:12.583061566Z" level=info msg="StartContainer for \"593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b\" returns successfully"
Apr 23 00:05:12.756085 kubelet[3117]: E0423 00:05:12.726190 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.357s"
Apr 23 00:05:12.810094 kubelet[3117]: E0423 00:05:12.809296 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:05:13.918096 kubelet[3117]: E0423 00:05:13.916427 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:05:13.931170 kubelet[3117]: E0423 00:05:13.926331 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:05:15.984116 kubelet[3117]: E0423 00:05:15.983160 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:05:17.045908 kubelet[3117]: E0423 00:05:17.040389 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:05:24.302940 kubelet[3117]: I0423 00:05:24.272143 3117 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-pkm95" podStartSLOduration=50.141686383 podStartE2EDuration="2m35.258379452s" podCreationTimestamp="2026-04-23 00:02:49 +0000 UTC" firstStartedPulling="2026-04-23 00:03:06.10584546 +0000 UTC m=+286.303678011" lastFinishedPulling="2026-04-23 00:04:51.222538522 +0000 UTC m=+391.420371080" observedRunningTime="2026-04-23 00:05:21.412170826 +0000 UTC m=+421.610003381" watchObservedRunningTime="2026-04-23 00:05:24.258379452 +0000 UTC m=+424.456212017"
Apr 23 00:05:24.867065 systemd-networkd[1544]: flannel.1: Link UP
Apr 23 00:05:24.871627 systemd-networkd[1544]: flannel.1: Gained carrier
Apr 23 00:05:26.006215 systemd-networkd[1544]: flannel.1: Gained IPv6LL
Apr 23 00:05:26.984379 kubelet[3117]: E0423 00:05:26.980919 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.403s"
Apr 23 00:05:28.269087 kubelet[3117]: E0423 00:05:28.267340 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:05:30.351018 kubelet[3117]: E0423 00:05:30.345443 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:05:35.826885 kubelet[3117]: E0423 00:05:35.826186 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.27s"
Apr 23 00:05:39.909995 kubelet[3117]: E0423 00:05:39.908324 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.255s"
Apr 23 00:05:43.732857 kubelet[3117]: E0423 00:05:43.724098 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.144s"
Apr 23 00:05:44.495991 kubelet[3117]: E0423 00:05:44.491078 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:05:44.839174 kubelet[3117]: E0423 00:05:44.826814 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:05:57.684236 kubelet[3117]: E0423 00:05:57.683820 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:09.748186 kubelet[3117]: E0423 00:06:09.747140 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.181s"
Apr 23 00:06:16.780331 systemd[1]: Started sshd@5-10.0.0.19:22-10.0.0.1:41560.service - OpenSSH per-connection server daemon (10.0.0.1:41560).
Apr 23 00:06:17.534063 kubelet[3117]: I0423 00:06:17.533953 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8545a18a-baab-48d7-a897-8f9ac0fa40ac-config-volume\") pod \"coredns-7d764666f9-t66sb\" (UID: \"8545a18a-baab-48d7-a897-8f9ac0fa40ac\") " pod="kube-system/coredns-7d764666f9-t66sb"
Apr 23 00:06:17.543031 kubelet[3117]: I0423 00:06:17.542936 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnftm\" (UniqueName: \"kubernetes.io/projected/8545a18a-baab-48d7-a897-8f9ac0fa40ac-kube-api-access-cnftm\") pod \"coredns-7d764666f9-t66sb\" (UID: \"8545a18a-baab-48d7-a897-8f9ac0fa40ac\") " pod="kube-system/coredns-7d764666f9-t66sb"
Apr 23 00:06:17.733039 kubelet[3117]: I0423 00:06:17.732899 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxqw8\" (UniqueName: \"kubernetes.io/projected/ac974d3d-8139-4155-8d41-38e6c88e34b1-kube-api-access-nxqw8\") pod \"coredns-7d764666f9-6b57s\" (UID: \"ac974d3d-8139-4155-8d41-38e6c88e34b1\") " pod="kube-system/coredns-7d764666f9-6b57s"
Apr 23 00:06:17.735865 kubelet[3117]: I0423 00:06:17.733988 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac974d3d-8139-4155-8d41-38e6c88e34b1-config-volume\") pod \"coredns-7d764666f9-6b57s\" (UID: \"ac974d3d-8139-4155-8d41-38e6c88e34b1\") " pod="kube-system/coredns-7d764666f9-6b57s"
Apr 23 00:06:17.803227 systemd[1]: Created slice kubepods-burstable-pod8545a18a_baab_48d7_a897_8f9ac0fa40ac.slice - libcontainer container kubepods-burstable-pod8545a18a_baab_48d7_a897_8f9ac0fa40ac.slice.
Apr 23 00:06:17.942098 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 41560 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:06:17.966408 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:06:18.495974 systemd-logind[1614]: New session 7 of user core.
Apr 23 00:06:18.581328 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 23 00:06:18.647934 systemd[1]: Created slice kubepods-burstable-podac974d3d_8139_4155_8d41_38e6c88e34b1.slice - libcontainer container kubepods-burstable-podac974d3d_8139_4155_8d41_38e6c88e34b1.slice.
Apr 23 00:06:19.417884 kubelet[3117]: E0423 00:06:19.417462 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:19.472027 containerd[1644]: time="2026-04-23T00:06:19.468507724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-6b57s,Uid:ac974d3d-8139-4155-8d41-38e6c88e34b1,Namespace:kube-system,Attempt:0,}"
Apr 23 00:06:21.212457 systemd-networkd[1544]: cni0: Link UP
Apr 23 00:06:21.212728 systemd-networkd[1544]: cni0: Gained carrier
Apr 23 00:06:21.662340 systemd-networkd[1544]: vethbc4d4482: Link UP
Apr 23 00:06:21.688435 kubelet[3117]: E0423 00:06:21.662408 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:21.900287 kernel: cni0: port 1(vethbc4d4482) entered blocking state
Apr 23 00:06:22.010357 kernel: cni0: port 1(vethbc4d4482) entered disabled state
Apr 23 00:06:22.054304 kernel: vethbc4d4482: entered allmulticast mode
Apr 23 00:06:22.080456 kernel: vethbc4d4482: entered promiscuous mode
Apr 23 00:06:22.119131 systemd-networkd[1544]: cni0: Lost carrier
Apr 23 00:06:22.658061 containerd[1644]: time="2026-04-23T00:06:22.655293197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-t66sb,Uid:8545a18a-baab-48d7-a897-8f9ac0fa40ac,Namespace:kube-system,Attempt:0,}"
Apr 23 00:06:22.883379 systemd-networkd[1544]: cni0: Gained IPv6LL
Apr 23 00:06:24.029156 kernel: cni0: port 1(vethbc4d4482) entered blocking state
Apr 23 00:06:24.056320 kernel: cni0: port 1(vethbc4d4482) entered forwarding state
Apr 23 00:06:24.254279 systemd-networkd[1544]: vethbc4d4482: Gained carrier
Apr 23 00:06:24.396472 systemd-networkd[1544]: cni0: Gained carrier
Apr 23 00:06:24.636100 containerd[1644]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000124c0), "name":"cbr0", "type":"bridge"}
Apr 23 00:06:24.636100 containerd[1644]: delegateAdd: netconf sent to delegate plugin:
Apr 23 00:06:25.376496 systemd-networkd[1544]: vethbc4d4482: Gained IPv6LL
Apr 23 00:06:26.869407 systemd-networkd[1544]: veth4199876e: Link UP
Apr 23 00:06:27.044136 kernel: cni0: port 2(veth4199876e) entered blocking state
Apr 23 00:06:27.066379 kernel: cni0: port 2(veth4199876e) entered disabled state
Apr 23 00:06:27.078191 kernel: veth4199876e: entered allmulticast mode
Apr 23 00:06:27.111767 kernel: veth4199876e: entered promiscuous mode
Apr 23 00:06:28.354282 kernel: cni0: port 2(veth4199876e) entered blocking state
Apr 23 00:06:28.361335 kernel: cni0: port 2(veth4199876e) entered forwarding state
Apr 23 00:06:28.359419 systemd-networkd[1544]: veth4199876e: Gained carrier
Apr 23 00:06:28.599796 containerd[1644]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Apr 23 00:06:28.599796 containerd[1644]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000127e0), "name":"cbr0", "type":"bridge"}
Apr 23 00:06:28.599796 containerd[1644]: delegateAdd: netconf sent to delegate plugin:
Apr 23 00:06:29.515086 containerd[1644]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-04-23T00:06:29.501492454Z" level=info msg="connecting to shim 0d0a0a70c7422b2cef42b9e004a15e1a1059fe1718ea8a33425318ae2adec1bc" address="unix:///run/containerd/s/67c69be641990521dabf9dd1bb45c56b8de1cb5c52746630539d0c0f71b3a252" namespace=k8s.io protocol=ttrpc version=3
Apr 23 00:06:29.969990 systemd-networkd[1544]: veth4199876e: Gained IPv6LL
Apr 23 00:06:30.694414 kubelet[3117]: E0423 00:06:30.693931 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.722s"
Apr 23 00:06:32.164730 kubelet[3117]: E0423 00:06:32.163465 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:33.491145 kubelet[3117]: E0423 00:06:33.471523 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.738s"
Apr 23 00:06:33.918510 systemd[1]: Started cri-containerd-0d0a0a70c7422b2cef42b9e004a15e1a1059fe1718ea8a33425318ae2adec1bc.scope - libcontainer container 0d0a0a70c7422b2cef42b9e004a15e1a1059fe1718ea8a33425318ae2adec1bc.
Apr 23 00:06:34.269013 containerd[1644]: time="2026-04-23T00:06:34.264222268Z" level=info msg="connecting to shim f83c8ee3bbf97811e9c93df4bf28fb8568026b1a78603c999142716d9f939240" address="unix:///run/containerd/s/b6a7c28ee66fd58e6a83577810191ba30a706b59b995f86ed378a15f32af52ac" namespace=k8s.io protocol=ttrpc version=3
Apr 23 00:06:35.367980 kubelet[3117]: E0423 00:06:35.308163 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.7s"
Apr 23 00:06:35.707523 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 23 00:06:36.584238 kubelet[3117]: E0423 00:06:36.583492 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:36.896996 systemd[1]: Started cri-containerd-f83c8ee3bbf97811e9c93df4bf28fb8568026b1a78603c999142716d9f939240.scope - libcontainer container f83c8ee3bbf97811e9c93df4bf28fb8568026b1a78603c999142716d9f939240.
Apr 23 00:06:36.901926 containerd[1644]: time="2026-04-23T00:06:36.900823448Z" level=error msg="get state for 0d0a0a70c7422b2cef42b9e004a15e1a1059fe1718ea8a33425318ae2adec1bc" error="context deadline exceeded"
Apr 23 00:06:36.901926 containerd[1644]: time="2026-04-23T00:06:36.901474609Z" level=warning msg="unknown status" status=0
Apr 23 00:06:37.770074 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 23 00:06:38.056363 systemd[1]: cri-containerd-6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff.scope: Deactivated successfully.
Apr 23 00:06:38.069251 systemd[1]: cri-containerd-6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff.scope: Consumed 23.551s CPU time, 20M memory peak.
Apr 23 00:06:38.264485 containerd[1644]: time="2026-04-23T00:06:38.256477151Z" level=info msg="received container exit event container_id:\"6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff\" id:\"6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff\" pid:3839 exit_status:1 exited_at:{seconds:1776902798 nanos:172214490}"
Apr 23 00:06:39.200088 containerd[1644]: time="2026-04-23T00:06:39.199167104Z" level=error msg="get state for 0d0a0a70c7422b2cef42b9e004a15e1a1059fe1718ea8a33425318ae2adec1bc" error="context deadline exceeded"
Apr 23 00:06:39.211514 containerd[1644]: time="2026-04-23T00:06:39.207246906Z" level=warning msg="unknown status" status=0
Apr 23 00:06:39.546210 containerd[1644]: time="2026-04-23T00:06:39.532052872Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 23 00:06:39.714225 containerd[1644]: time="2026-04-23T00:06:39.683407158Z" level=error msg="ttrpc: received message on inactive stream" stream=5
Apr 23 00:06:40.296297 kubelet[3117]: E0423 00:06:40.214977 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.644s"
Apr 23 00:06:40.578997 kubelet[3117]: E0423 00:06:40.565052 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:41.651078 containerd[1644]: time="2026-04-23T00:06:41.649053771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-t66sb,Uid:8545a18a-baab-48d7-a897-8f9ac0fa40ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"f83c8ee3bbf97811e9c93df4bf28fb8568026b1a78603c999142716d9f939240\""
Apr 23 00:06:41.706503 sshd[4101]: Connection closed by 10.0.0.1 port 41560
Apr 23 00:06:41.710349 sshd-session[4090]: pam_unix(sshd:session): session closed for user core
Apr 23 00:06:41.752523 kubelet[3117]: E0423 00:06:41.752326 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:41.757285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff-rootfs.mount: Deactivated successfully.
Apr 23 00:06:41.775320 systemd[1]: sshd@5-10.0.0.19:22-10.0.0.1:41560.service: Deactivated successfully.
Apr 23 00:06:41.810257 systemd[1]: session-7.scope: Deactivated successfully.
Apr 23 00:06:41.820401 systemd[1]: session-7.scope: Consumed 6.210s CPU time, 18.1M memory peak.
Apr 23 00:06:41.821096 systemd[1]: cri-containerd-593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b.scope: Deactivated successfully.
Apr 23 00:06:41.821397 systemd[1]: cri-containerd-593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b.scope: Consumed 21.989s CPU time, 46.3M memory peak, 4K read from disk.
Apr 23 00:06:41.859265 systemd-logind[1614]: Session 7 logged out. Waiting for processes to exit.
Apr 23 00:06:41.998015 containerd[1644]: time="2026-04-23T00:06:41.971209191Z" level=info msg="received container exit event container_id:\"593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b\" id:\"593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b\" pid:3873 exit_status:1 exited_at:{seconds:1776902801 nanos:862482689}"
Apr 23 00:06:41.971475 systemd-logind[1614]: Removed session 7.
Apr 23 00:06:41.998534 containerd[1644]: time="2026-04-23T00:06:41.998427790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-6b57s,Uid:ac974d3d-8139-4155-8d41-38e6c88e34b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d0a0a70c7422b2cef42b9e004a15e1a1059fe1718ea8a33425318ae2adec1bc\""
Apr 23 00:06:42.235053 containerd[1644]: time="2026-04-23T00:06:42.214516007Z" level=info msg="CreateContainer within sandbox \"f83c8ee3bbf97811e9c93df4bf28fb8568026b1a78603c999142716d9f939240\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 23 00:06:42.274904 kubelet[3117]: E0423 00:06:42.262472 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:42.776518 containerd[1644]: time="2026-04-23T00:06:42.776347635Z" level=info msg="CreateContainer within sandbox \"0d0a0a70c7422b2cef42b9e004a15e1a1059fe1718ea8a33425318ae2adec1bc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 23 00:06:42.965039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2071914024.mount: Deactivated successfully.
Apr 23 00:06:43.068430 containerd[1644]: time="2026-04-23T00:06:43.000470631Z" level=info msg="Container 287399b6d325d1cabc8212367f7119fd92077bfd83d079645ffb7130eb072576: CDI devices from CRI Config.CDIDevices: []"
Apr 23 00:06:43.065959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1721365015.mount: Deactivated successfully.
Apr 23 00:06:43.252465 containerd[1644]: time="2026-04-23T00:06:43.252206245Z" level=info msg="CreateContainer within sandbox \"f83c8ee3bbf97811e9c93df4bf28fb8568026b1a78603c999142716d9f939240\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"287399b6d325d1cabc8212367f7119fd92077bfd83d079645ffb7130eb072576\""
Apr 23 00:06:43.400111 containerd[1644]: time="2026-04-23T00:06:43.398937496Z" level=info msg="StartContainer for \"287399b6d325d1cabc8212367f7119fd92077bfd83d079645ffb7130eb072576\""
Apr 23 00:06:43.747521 containerd[1644]: time="2026-04-23T00:06:43.743642861Z" level=info msg="connecting to shim 287399b6d325d1cabc8212367f7119fd92077bfd83d079645ffb7130eb072576" address="unix:///run/containerd/s/b6a7c28ee66fd58e6a83577810191ba30a706b59b995f86ed378a15f32af52ac" protocol=ttrpc version=3
Apr 23 00:06:43.861502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3356093365.mount: Deactivated successfully.
Apr 23 00:06:43.982016 containerd[1644]: time="2026-04-23T00:06:43.981755367Z" level=info msg="Container 2714570d4082d5d485c063c52cbfdfa66c9ab0f3d172683b4bed51f54bfa5d8a: CDI devices from CRI Config.CDIDevices: []"
Apr 23 00:06:44.011665 kubelet[3117]: I0423 00:06:44.010001 3117 scope.go:122] "RemoveContainer" containerID="2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a"
Apr 23 00:06:44.079469 kubelet[3117]: I0423 00:06:44.078010 3117 scope.go:122] "RemoveContainer" containerID="6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff"
Apr 23 00:06:44.081681 kubelet[3117]: E0423 00:06:44.081623 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:44.081915 kubelet[3117]: E0423 00:06:44.081897 3117 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 23 00:06:44.108305 containerd[1644]: time="2026-04-23T00:06:44.108050085Z" level=info msg="CreateContainer within sandbox \"0d0a0a70c7422b2cef42b9e004a15e1a1059fe1718ea8a33425318ae2adec1bc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2714570d4082d5d485c063c52cbfdfa66c9ab0f3d172683b4bed51f54bfa5d8a\""
Apr 23 00:06:44.111508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b-rootfs.mount: Deactivated successfully.
Apr 23 00:06:44.196819 containerd[1644]: time="2026-04-23T00:06:44.164535751Z" level=info msg="StartContainer for \"2714570d4082d5d485c063c52cbfdfa66c9ab0f3d172683b4bed51f54bfa5d8a\""
Apr 23 00:06:44.282278 containerd[1644]: time="2026-04-23T00:06:44.280227509Z" level=info msg="connecting to shim 2714570d4082d5d485c063c52cbfdfa66c9ab0f3d172683b4bed51f54bfa5d8a" address="unix:///run/containerd/s/67c69be641990521dabf9dd1bb45c56b8de1cb5c52746630539d0c0f71b3a252" protocol=ttrpc version=3
Apr 23 00:06:44.282278 containerd[1644]: time="2026-04-23T00:06:44.280427202Z" level=info msg="RemoveContainer for \"2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a\""
Apr 23 00:06:44.349433 containerd[1644]: time="2026-04-23T00:06:44.349289431Z" level=info msg="RemoveContainer for \"2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a\" returns successfully"
Apr 23 00:06:44.431191 systemd[1]: Started cri-containerd-287399b6d325d1cabc8212367f7119fd92077bfd83d079645ffb7130eb072576.scope - libcontainer container 287399b6d325d1cabc8212367f7119fd92077bfd83d079645ffb7130eb072576.
Apr 23 00:06:44.542237 systemd[1]: Started cri-containerd-2714570d4082d5d485c063c52cbfdfa66c9ab0f3d172683b4bed51f54bfa5d8a.scope - libcontainer container 2714570d4082d5d485c063c52cbfdfa66c9ab0f3d172683b4bed51f54bfa5d8a.
Apr 23 00:06:44.659363 containerd[1644]: time="2026-04-23T00:06:44.659215880Z" level=info msg="StartContainer for \"287399b6d325d1cabc8212367f7119fd92077bfd83d079645ffb7130eb072576\" returns successfully"
Apr 23 00:06:44.847949 containerd[1644]: time="2026-04-23T00:06:44.836536870Z" level=info msg="StartContainer for \"2714570d4082d5d485c063c52cbfdfa66c9ab0f3d172683b4bed51f54bfa5d8a\" returns successfully"
Apr 23 00:06:45.402974 kubelet[3117]: E0423 00:06:45.402709 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:45.695839 kubelet[3117]: E0423 00:06:45.691364 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:45.761959 kubelet[3117]: I0423 00:06:45.761217 3117 scope.go:122] "RemoveContainer" containerID="304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f"
Apr 23 00:06:45.765431 kubelet[3117]: I0423 00:06:45.765367 3117 scope.go:122] "RemoveContainer" containerID="593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b"
Apr 23 00:06:45.765861 kubelet[3117]: E0423 00:06:45.765500 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:45.766245 kubelet[3117]: E0423 00:06:45.766162 3117 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 23 00:06:45.903198 containerd[1644]: time="2026-04-23T00:06:45.901316475Z" level=info msg="RemoveContainer for \"304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f\""
Apr 23 00:06:45.941779 containerd[1644]: time="2026-04-23T00:06:45.941041907Z" level=info msg="RemoveContainer for \"304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f\" returns successfully"
Apr 23 00:06:46.099940 kubelet[3117]: I0423 00:06:46.085403 3117 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-t66sb" podStartSLOduration=237.085381203 podStartE2EDuration="3m57.085381203s" podCreationTimestamp="2026-04-23 00:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 00:06:46.085223576 +0000 UTC m=+506.283056134" watchObservedRunningTime="2026-04-23 00:06:46.085381203 +0000 UTC m=+506.283213767"
Apr 23 00:06:46.966962 kubelet[3117]: I0423 00:06:46.966680 3117 scope.go:122] "RemoveContainer" containerID="6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff"
Apr 23 00:06:46.977471 kubelet[3117]: E0423 00:06:46.969254 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:47.007129 kubelet[3117]: E0423 00:06:47.002170 3117 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 23 00:06:47.033862 systemd[1]: Started sshd@6-10.0.0.19:22-10.0.0.1:59308.service - OpenSSH per-connection server daemon (10.0.0.1:59308).
Apr 23 00:06:47.196346 kubelet[3117]: E0423 00:06:47.193157 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:47.260047 kubelet[3117]: I0423 00:06:47.240439 3117 scope.go:122] "RemoveContainer" containerID="593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b"
Apr 23 00:06:47.260047 kubelet[3117]: E0423 00:06:47.243289 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:47.273208 kubelet[3117]: E0423 00:06:47.262339 3117 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 23 00:06:47.360494 kubelet[3117]: E0423 00:06:47.354516 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:47.642256 kubelet[3117]: E0423 00:06:47.641901 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:48.172480 sshd[4479]: Accepted publickey for core from 10.0.0.1 port 59308 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:06:48.243080 sshd-session[4479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:06:48.863185 systemd-logind[1614]: New session 8 of user core.
Apr 23 00:06:49.010306 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 23 00:06:49.051536 kubelet[3117]: E0423 00:06:49.050691 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:49.334159 kubelet[3117]: E0423 00:06:49.323327 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:49.660868 kubelet[3117]: E0423 00:06:49.658254 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.069s"
Apr 23 00:06:50.564239 kubelet[3117]: I0423 00:06:50.558521 3117 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-6b57s" podStartSLOduration=240.558440523 podStartE2EDuration="4m0.558440523s" podCreationTimestamp="2026-04-23 00:02:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 00:06:48.5599365 +0000 UTC m=+508.757769064" watchObservedRunningTime="2026-04-23 00:06:50.558440523 +0000 UTC m=+510.756273085"
Apr 23 00:06:55.473362 sshd[4496]: Connection closed by 10.0.0.1 port 59308
Apr 23 00:06:55.477498 sshd-session[4479]: pam_unix(sshd:session): session closed for user core
Apr 23 00:06:55.562025 systemd[1]: sshd@6-10.0.0.19:22-10.0.0.1:59308.service: Deactivated successfully.
Apr 23 00:06:55.607044 systemd[1]: session-8.scope: Deactivated successfully.
Apr 23 00:06:55.609265 systemd[1]: session-8.scope: Consumed 4.210s CPU time, 16.1M memory peak.
Apr 23 00:06:55.618286 systemd-logind[1614]: Session 8 logged out. Waiting for processes to exit.
Apr 23 00:06:55.622413 systemd-logind[1614]: Removed session 8.
Apr 23 00:07:00.671013 systemd[1]: Started sshd@7-10.0.0.19:22-10.0.0.1:52580.service - OpenSSH per-connection server daemon (10.0.0.1:52580).
Apr 23 00:07:02.886778 sshd[4552]: Accepted publickey for core from 10.0.0.1 port 52580 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:07:03.009382 sshd-session[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:07:03.423500 systemd-logind[1614]: New session 9 of user core.
Apr 23 00:07:03.569134 kubelet[3117]: I0423 00:07:03.565484 3117 scope.go:122] "RemoveContainer" containerID="6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff"
Apr 23 00:07:03.575151 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 23 00:07:03.591078 kubelet[3117]: E0423 00:07:03.587534 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:07:03.627892 kubelet[3117]: E0423 00:07:03.627790 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:07:03.797276 containerd[1644]: time="2026-04-23T00:07:03.780359429Z" level=info msg="CreateContainer within sandbox \"c1b5349945d1a2228b70d5ed9e338ee60bce5f807f5a4f066d7523daa1d1b86f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:3,}"
Apr 23 00:07:04.009062 containerd[1644]: time="2026-04-23T00:07:04.006681108Z" level=info msg="Container e3f2e90c95c3093882845bfc73d92b9a22d9da7efe4dd2f62f42217944939ffd: CDI devices from CRI Config.CDIDevices: []"
Apr 23 00:07:04.064836 containerd[1644]: time="2026-04-23T00:07:04.061841986Z" level=info msg="CreateContainer within sandbox \"c1b5349945d1a2228b70d5ed9e338ee60bce5f807f5a4f066d7523daa1d1b86f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:3,} returns container id \"e3f2e90c95c3093882845bfc73d92b9a22d9da7efe4dd2f62f42217944939ffd\""
Apr 23 00:07:04.089686 containerd[1644]: time="2026-04-23T00:07:04.089023826Z" level=info msg="StartContainer for \"e3f2e90c95c3093882845bfc73d92b9a22d9da7efe4dd2f62f42217944939ffd\""
Apr 23 00:07:04.115445 containerd[1644]: time="2026-04-23T00:07:04.115095461Z" level=info msg="connecting to shim e3f2e90c95c3093882845bfc73d92b9a22d9da7efe4dd2f62f42217944939ffd" address="unix:///run/containerd/s/3a5875e2ffea8b52b40e1376d493fb4d81e0bcbfc3fa4f4f720193f542909548" protocol=ttrpc version=3
Apr 23 00:07:04.566384 systemd[1]: Started cri-containerd-e3f2e90c95c3093882845bfc73d92b9a22d9da7efe4dd2f62f42217944939ffd.scope - libcontainer container e3f2e90c95c3093882845bfc73d92b9a22d9da7efe4dd2f62f42217944939ffd.
Apr 23 00:07:06.255918 containerd[1644]: time="2026-04-23T00:07:06.252806053Z" level=info msg="StartContainer for \"e3f2e90c95c3093882845bfc73d92b9a22d9da7efe4dd2f62f42217944939ffd\" returns successfully"
Apr 23 00:07:08.132235 kubelet[3117]: E0423 00:07:08.129731 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:07:09.005311 sshd[4567]: Connection closed by 10.0.0.1 port 52580
Apr 23 00:07:09.029440 sshd-session[4552]: pam_unix(sshd:session): session closed for user core
Apr 23 00:07:09.157814 systemd[1]: sshd@7-10.0.0.19:22-10.0.0.1:52580.service: Deactivated successfully.
Apr 23 00:07:09.202364 systemd[1]: sshd@7-10.0.0.19:22-10.0.0.1:52580.service: Consumed 1.032s CPU time, 4.2M memory peak.
Apr 23 00:07:09.299995 systemd[1]: session-9.scope: Deactivated successfully.
Apr 23 00:07:09.315478 systemd[1]: session-9.scope: Consumed 2.623s CPU time, 16.3M memory peak.
Apr 23 00:07:09.341248 systemd-logind[1614]: Session 9 logged out. Waiting for processes to exit.
Apr 23 00:07:09.483350 systemd-logind[1614]: Removed session 9.
Apr 23 00:07:09.500384 kubelet[3117]: E0423 00:07:09.491440 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:07:14.154554 systemd[1]: Started sshd@8-10.0.0.19:22-10.0.0.1:46770.service - OpenSSH per-connection server daemon (10.0.0.1:46770).
Apr 23 00:07:16.064484 sshd[4661]: Accepted publickey for core from 10.0.0.1 port 46770 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:07:16.136966 sshd-session[4661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:07:16.404967 systemd-logind[1614]: New session 10 of user core.
Apr 23 00:07:16.533514 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 23 00:07:16.953454 kubelet[3117]: E0423 00:07:16.948223 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:07:17.251814 kubelet[3117]: E0423 00:07:17.251140 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:07:18.563077 kubelet[3117]: E0423 00:07:18.554319 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:07:22.556455 sshd[4665]: Connection closed by 10.0.0.1 port 46770
Apr 23 00:07:22.558164 sshd-session[4661]: pam_unix(sshd:session): session closed for user core
Apr 23 00:07:22.599168 systemd[1]: sshd@8-10.0.0.19:22-10.0.0.1:46770.service: Deactivated successfully.
Apr 23 00:07:22.788904 systemd[1]: session-10.scope: Deactivated successfully.
Apr 23 00:07:22.795989 systemd[1]: session-10.scope: Consumed 4.200s CPU time, 15.5M memory peak.
Apr 23 00:07:22.853375 systemd-logind[1614]: Session 10 logged out. Waiting for processes to exit.
Apr 23 00:07:22.889085 systemd-logind[1614]: Removed session 10.
Apr 23 00:07:28.066925 kubelet[3117]: E0423 00:07:28.058409 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.499s"
Apr 23 00:07:28.352161 systemd[1]: Started sshd@9-10.0.0.19:22-10.0.0.1:55826.service - OpenSSH per-connection server daemon (10.0.0.1:55826).
Apr 23 00:07:30.028052 sshd[4719]: Accepted publickey for core from 10.0.0.1 port 55826 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:07:30.056908 sshd-session[4719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:07:30.612214 kubelet[3117]: E0423 00:07:30.467353 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.902s"
Apr 23 00:07:30.764160 systemd-logind[1614]: New session 11 of user core.
Apr 23 00:07:30.902991 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 23 00:07:31.900003 kubelet[3117]: E0423 00:07:31.897477 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.201s"
Apr 23 00:07:36.568332 kubelet[3117]: I0423 00:07:36.565353 3117 scope.go:122] "RemoveContainer" containerID="593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b"
Apr 23 00:07:36.568332 kubelet[3117]: E0423 00:07:36.570541 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:07:37.085531 containerd[1644]: time="2026-04-23T00:07:37.084008084Z" level=info msg="CreateContainer within sandbox \"1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:6,}"
Apr 23 00:07:37.976066 containerd[1644]: time="2026-04-23T00:07:37.975352269Z" level=info msg="Container 7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab: CDI devices from CRI Config.CDIDevices: []"
Apr 23 00:07:38.012925 kubelet[3117]: E0423 00:07:38.009366 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.446s"
Apr 23 00:07:38.317001 containerd[1644]: time="2026-04-23T00:07:38.309244862Z" level=info msg="CreateContainer within sandbox \"1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:6,} returns container id \"7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab\""
Apr 23 00:07:38.482992 containerd[1644]: time="2026-04-23T00:07:38.482331605Z" level=info msg="StartContainer for \"7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab\""
Apr 23 00:07:38.825307 containerd[1644]: time="2026-04-23T00:07:38.825050261Z" level=info msg="connecting to shim 7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab" address="unix:///run/containerd/s/6ba32b72164cd91bf659ca1b461d59fa9373c7c833adb85e108c1f63f7cb4764" protocol=ttrpc version=3
Apr 23 00:07:39.852169 kubelet[3117]: E0423 00:07:39.851297 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.132s"
Apr 23 00:07:41.583506 systemd[1]: Started cri-containerd-7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab.scope - libcontainer container 7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab.
Apr 23 00:07:42.392334 kubelet[3117]: E0423 00:07:42.389061 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.812s"
Apr 23 00:07:44.052439 kubelet[3117]: E0423 00:07:43.985472 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.407s"
Apr 23 00:07:45.631183 containerd[1644]: time="2026-04-23T00:07:45.628470049Z" level=info msg="StartContainer for \"7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab\" returns successfully"
Apr 23 00:07:48.563284 sshd[4734]: Connection closed by 10.0.0.1 port 55826
Apr 23 00:07:48.581784 sshd-session[4719]: pam_unix(sshd:session): session closed for user core
Apr 23 00:07:48.901360 systemd[1]: sshd@9-10.0.0.19:22-10.0.0.1:55826.service: Deactivated successfully.
Apr 23 00:07:49.103150 systemd[1]: session-11.scope: Deactivated successfully.
Apr 23 00:07:49.106453 systemd[1]: session-11.scope: Consumed 5.952s CPU time, 15.3M memory peak.
Apr 23 00:07:49.136422 kubelet[3117]: E0423 00:07:49.135536 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.446s"
Apr 23 00:07:49.160713 systemd-logind[1614]: Session 11 logged out. Waiting for processes to exit.
Apr 23 00:07:49.343366 systemd-logind[1614]: Removed session 11.
Apr 23 00:07:50.710137 kubelet[3117]: E0423 00:07:50.709158 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.542s"
Apr 23 00:07:51.431247 kubelet[3117]: E0423 00:07:51.430903 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:07:53.933979 systemd[1]: Started sshd@10-10.0.0.19:22-10.0.0.1:36862.service - OpenSSH per-connection server daemon (10.0.0.1:36862).
Apr 23 00:07:56.299786 kubelet[3117]: E0423 00:07:56.237330 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.665s"
Apr 23 00:07:56.532995 kubelet[3117]: E0423 00:07:56.501275 3117 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 23 00:07:56.690088 kubelet[3117]: E0423 00:07:56.687409 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:07:56.810993 systemd[1]: cri-containerd-e3f2e90c95c3093882845bfc73d92b9a22d9da7efe4dd2f62f42217944939ffd.scope: Deactivated successfully.
Apr 23 00:07:56.882208 systemd[1]: cri-containerd-e3f2e90c95c3093882845bfc73d92b9a22d9da7efe4dd2f62f42217944939ffd.scope: Consumed 8.836s CPU time, 21M memory peak.
Apr 23 00:07:56.938477 containerd[1644]: time="2026-04-23T00:07:56.901394808Z" level=info msg="received container exit event container_id:\"e3f2e90c95c3093882845bfc73d92b9a22d9da7efe4dd2f62f42217944939ffd\" id:\"e3f2e90c95c3093882845bfc73d92b9a22d9da7efe4dd2f62f42217944939ffd\" pid:4593 exit_status:1 exited_at:{seconds:1776902876 nanos:872416834}"
Apr 23 00:07:58.183300 sshd[4831]: Accepted publickey for core from 10.0.0.1 port 36862 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:07:58.567060 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:07:59.213311 systemd-logind[1614]: New session 12 of user core.
Apr 23 00:07:59.377183 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 23 00:08:02.995177 kubelet[3117]: E0423 00:08:02.993347 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.293s"
Apr 23 00:08:03.735170 kubelet[3117]: E0423 00:08:03.734320 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:08:03.754280 kubelet[3117]: E0423 00:08:03.752191 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:08:04.054452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3f2e90c95c3093882845bfc73d92b9a22d9da7efe4dd2f62f42217944939ffd-rootfs.mount: Deactivated successfully.
Apr 23 00:08:04.910374 kubelet[3117]: E0423 00:08:04.902452 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.881s"
Apr 23 00:08:05.638025 containerd[1644]: time="2026-04-23T00:08:05.631381293Z" level=info msg="container event discarded" container=2ccfcca9a5f7ed526012346ba67cee1a88ef2da98db5f6130e5433f17d6ff805 type=CONTAINER_CREATED_EVENT
Apr 23 00:08:05.638025 containerd[1644]: time="2026-04-23T00:08:05.631829269Z" level=info msg="container event discarded" container=2ccfcca9a5f7ed526012346ba67cee1a88ef2da98db5f6130e5433f17d6ff805 type=CONTAINER_STARTED_EVENT
Apr 23 00:08:05.954813 containerd[1644]: time="2026-04-23T00:08:05.952706740Z" level=info msg="container event discarded" container=2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19 type=CONTAINER_STOPPED_EVENT
Apr 23 00:08:06.884331 kubelet[3117]: E0423 00:08:06.763394 3117 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 23 00:08:08.458142 containerd[1644]: time="2026-04-23T00:08:08.455181812Z" level=info msg="container event discarded" container=bf5cddb6b1e3b415d184f81797269cbfea1fd7cb7dc4adefee0993cdf6bc56c4 type=CONTAINER_CREATED_EVENT
Apr 23 00:08:08.458142 containerd[1644]: time="2026-04-23T00:08:08.455413127Z" level=info msg="container event discarded" container=bf5cddb6b1e3b415d184f81797269cbfea1fd7cb7dc4adefee0993cdf6bc56c4 type=CONTAINER_STARTED_EVENT
Apr 23 00:08:09.410458 kubelet[3117]: E0423 00:08:09.244549 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.231s"
Apr 23 00:08:09.984913 kubelet[3117]: I0423 00:08:09.972148 3117 scope.go:122] "RemoveContainer" containerID="6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff"
Apr 23 00:08:10.155320 kubelet[3117]: I0423 00:08:10.151522 3117 scope.go:122] "RemoveContainer" containerID="e3f2e90c95c3093882845bfc73d92b9a22d9da7efe4dd2f62f42217944939ffd"
Apr 23 00:08:10.471315 containerd[1644]: time="2026-04-23T00:08:10.469364413Z" level=info msg="container event discarded" container=1917051bed70f4cb581529da4cc0abb060f039c780f1bbb36e2f4bab0ec01cd3 type=CONTAINER_CREATED_EVENT
Apr 23 00:08:10.533350 kubelet[3117]: E0423 00:08:10.531248 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:08:10.691151 kubelet[3117]: E0423 00:08:10.690153 3117 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 23 00:08:12.180999 containerd[1644]: time="2026-04-23T00:08:12.178763274Z" level=info msg="RemoveContainer for \"6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff\""
Apr 23 00:08:12.396221 kubelet[3117]: E0423 00:08:12.395541 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.612s"
Apr 23 00:08:12.798319 containerd[1644]: time="2026-04-23T00:08:12.796372689Z" level=info msg="RemoveContainer for \"6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff\" returns successfully"
Apr 23 00:08:13.086382 kubelet[3117]: E0423 00:08:13.066521 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:08:13.840175 kubelet[3117]: E0423 00:08:13.837339 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.37s"
Apr 23 00:08:14.375115 containerd[1644]: time="2026-04-23T00:08:14.374283282Z" level=info msg="container event discarded" container=1917051bed70f4cb581529da4cc0abb060f039c780f1bbb36e2f4bab0ec01cd3 type=CONTAINER_STARTED_EVENT
Apr 23 00:08:14.931442 kubelet[3117]: E0423 00:08:14.920553 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.041s"
Apr 23 00:08:15.101798 kubelet[3117]: E0423 00:08:15.101313 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:08:15.106123 kubelet[3117]: E0423 00:08:15.103056 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:08:15.488204 containerd[1644]: time="2026-04-23T00:08:15.479367659Z" level=info msg="container event discarded" container=304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f type=CONTAINER_CREATED_EVENT
Apr 23 00:08:16.954486 kubelet[3117]: E0423 00:08:16.952132 3117 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 23 00:08:17.595235 kubelet[3117]: I0423 00:08:17.592419 3117 scope.go:122] "RemoveContainer" containerID="e3f2e90c95c3093882845bfc73d92b9a22d9da7efe4dd2f62f42217944939ffd"
Apr 23 00:08:17.684517 kubelet[3117]: E0423 00:08:17.681929 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:08:17.716130 kubelet[3117]: E0423 00:08:17.712336 3117 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 23 00:08:17.734547 kubelet[3117]: E0423 00:08:17.729343 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:08:22.403407 kubelet[3117]: E0423 00:08:22.400866 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.817s"
Apr 23 00:08:22.607526 kubelet[3117]: E0423 00:08:22.603950 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:08:22.856089 containerd[1644]: time="2026-04-23T00:08:22.844456582Z" level=info msg="container event discarded" container=304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f type=CONTAINER_STARTED_EVENT
Apr 23 00:08:27.323302 kubelet[3117]: E0423 00:08:27.320352 3117 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 23 00:08:27.396539 kubelet[3117]: E0423 00:08:27.323518 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:08:28.121225 kubelet[3117]: E0423 00:08:28.118409 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.489s"
Apr 23 00:08:29.167401 containerd[1644]: time="2026-04-23T00:08:29.137452604Z" level=info msg="container event discarded" container=7cd94f43cd93946013f83343257972a8cfef99952ecfc270e74742ca099bc638 type=CONTAINER_CREATED_EVENT
Apr 23 00:08:31.783264 containerd[1644]: time="2026-04-23T00:08:31.775492850Z" level=info msg="container event discarded" container=7cd94f43cd93946013f83343257972a8cfef99952ecfc270e74742ca099bc638 type=CONTAINER_STARTED_EVENT
Apr 23 00:08:33.314151 containerd[1644]: time="2026-04-23T00:08:33.306393423Z" level=info msg="container event discarded" container=7cd94f43cd93946013f83343257972a8cfef99952ecfc270e74742ca099bc638 type=CONTAINER_STOPPED_EVENT
Apr 23 00:08:34.610311 sshd[4862]: Connection closed by 10.0.0.1 port 36862
Apr 23 00:08:34.626011 sshd-session[4831]: pam_unix(sshd:session): session closed for user core
Apr 23 00:08:34.979519 systemd[1]: sshd@10-10.0.0.19:22-10.0.0.1:36862.service: Deactivated successfully.
Apr 23 00:08:35.013974 systemd[1]: sshd@10-10.0.0.19:22-10.0.0.1:36862.service: Consumed 1.274s CPU time, 4M memory peak.
Apr 23 00:08:35.245331 systemd[1]: session-12.scope: Deactivated successfully.
Apr 23 00:08:35.266752 systemd[1]: session-12.scope: Consumed 8.556s CPU time, 15.5M memory peak.
Apr 23 00:08:35.464131 systemd-logind[1614]: Session 12 logged out. Waiting for processes to exit.
Apr 23 00:08:35.555826 systemd-logind[1614]: Removed session 12.
Apr 23 00:08:35.998062 kubelet[3117]: E0423 00:08:35.996385 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.133s" Apr 23 00:08:37.284452 kubelet[3117]: E0423 00:08:37.248411 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.194s" Apr 23 00:08:37.593409 kubelet[3117]: E0423 00:08:37.576947 3117 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 23 00:08:37.593409 kubelet[3117]: I0423 00:08:37.577332 3117 controller.go:171] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Apr 23 00:08:38.278303 kubelet[3117]: E0423 00:08:38.263341 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.011s" Apr 23 00:08:39.685967 kubelet[3117]: E0423 00:08:39.684433 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.125s" Apr 23 00:08:40.546295 systemd[1]: Started sshd@11-10.0.0.19:22-10.0.0.1:51006.service - OpenSSH per-connection server daemon (10.0.0.1:51006). Apr 23 00:08:44.425152 kubelet[3117]: E0423 00:08:44.424457 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.865s" Apr 23 00:08:46.909321 sshd[4974]: Accepted publickey for core from 10.0.0.1 port 51006 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:08:46.993396 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:08:47.925040 systemd-logind[1614]: New session 13 of user core. 
Apr 23 00:08:47.939460 kubelet[3117]: E0423 00:08:47.937477 3117 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="200ms" Apr 23 00:08:48.140981 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 23 00:08:49.785550 kubelet[3117]: E0423 00:08:49.464113 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.891s" Apr 23 00:08:50.057179 kubelet[3117]: I0423 00:08:50.048785 3117 scope.go:122] "RemoveContainer" containerID="e3f2e90c95c3093882845bfc73d92b9a22d9da7efe4dd2f62f42217944939ffd" Apr 23 00:08:50.057179 kubelet[3117]: E0423 00:08:50.049048 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:08:51.674357 containerd[1644]: time="2026-04-23T00:08:51.673172723Z" level=info msg="CreateContainer within sandbox \"c1b5349945d1a2228b70d5ed9e338ee60bce5f807f5a4f066d7523daa1d1b86f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:4,}" Apr 23 00:08:52.596552 kubelet[3117]: E0423 00:08:52.595395 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.735s" Apr 23 00:08:52.961253 containerd[1644]: time="2026-04-23T00:08:52.934420339Z" level=info msg="Container c9a9c0dd0f7f4ab0f5cbb835c3da1b987298bbdf2eb81b25d9b007a8bc6b8123: CDI devices from CRI Config.CDIDevices: []" Apr 23 00:08:53.479203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4207501646.mount: Deactivated successfully. 
Apr 23 00:08:53.893976 containerd[1644]: time="2026-04-23T00:08:53.888968550Z" level=info msg="CreateContainer within sandbox \"c1b5349945d1a2228b70d5ed9e338ee60bce5f807f5a4f066d7523daa1d1b86f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:4,} returns container id \"c9a9c0dd0f7f4ab0f5cbb835c3da1b987298bbdf2eb81b25d9b007a8bc6b8123\"" Apr 23 00:08:54.579755 containerd[1644]: time="2026-04-23T00:08:54.579404436Z" level=info msg="StartContainer for \"c9a9c0dd0f7f4ab0f5cbb835c3da1b987298bbdf2eb81b25d9b007a8bc6b8123\"" Apr 23 00:08:54.676174 containerd[1644]: time="2026-04-23T00:08:54.665533155Z" level=info msg="connecting to shim c9a9c0dd0f7f4ab0f5cbb835c3da1b987298bbdf2eb81b25d9b007a8bc6b8123" address="unix:///run/containerd/s/3a5875e2ffea8b52b40e1376d493fb4d81e0bcbfc3fa4f4f720193f542909548" protocol=ttrpc version=3 Apr 23 00:08:56.293260 kubelet[3117]: E0423 00:08:56.275517 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.678s" Apr 23 00:08:56.610363 systemd[1]: Started cri-containerd-c9a9c0dd0f7f4ab0f5cbb835c3da1b987298bbdf2eb81b25d9b007a8bc6b8123.scope - libcontainer container c9a9c0dd0f7f4ab0f5cbb835c3da1b987298bbdf2eb81b25d9b007a8bc6b8123. Apr 23 00:08:57.714256 kubelet[3117]: E0423 00:08:57.713275 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.198s" Apr 23 00:08:58.698136 kubelet[3117]: E0423 00:08:58.697419 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:08:58.943336 sshd[5003]: Connection closed by 10.0.0.1 port 51006 Apr 23 00:08:58.957018 sshd-session[4974]: pam_unix(sshd:session): session closed for user core Apr 23 00:08:59.359126 systemd[1]: sshd@11-10.0.0.19:22-10.0.0.1:51006.service: Deactivated successfully. 
Apr 23 00:08:59.386395 systemd[1]: sshd@11-10.0.0.19:22-10.0.0.1:51006.service: Consumed 1.340s CPU time, 4.2M memory peak. Apr 23 00:08:59.598836 systemd[1]: session-13.scope: Deactivated successfully. Apr 23 00:08:59.610947 systemd[1]: session-13.scope: Consumed 5.534s CPU time, 14.5M memory peak. Apr 23 00:08:59.629978 systemd-logind[1614]: Session 13 logged out. Waiting for processes to exit. Apr 23 00:08:59.647406 systemd-logind[1614]: Removed session 13. Apr 23 00:09:01.675419 kubelet[3117]: E0423 00:09:01.663182 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.049s" Apr 23 00:09:01.951467 containerd[1644]: time="2026-04-23T00:09:01.931302731Z" level=info msg="StartContainer for \"c9a9c0dd0f7f4ab0f5cbb835c3da1b987298bbdf2eb81b25d9b007a8bc6b8123\" returns successfully" Apr 23 00:09:03.730661 kubelet[3117]: E0423 00:09:03.730213 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:09:04.286268 systemd[1]: Started sshd@12-10.0.0.19:22-10.0.0.1:44232.service - OpenSSH per-connection server daemon (10.0.0.1:44232). 
Apr 23 00:09:05.148196 kubelet[3117]: E0423 00:09:05.137172 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:09:06.752379 sshd[5092]: Accepted publickey for core from 10.0.0.1 port 44232 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:09:06.844282 sshd-session[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:09:07.254007 kubelet[3117]: E0423 00:09:07.242542 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:09:07.731858 systemd-logind[1614]: New session 14 of user core. Apr 23 00:09:07.784966 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 23 00:09:12.922393 sshd[5113]: Connection closed by 10.0.0.1 port 44232 Apr 23 00:09:12.940533 sshd-session[5092]: pam_unix(sshd:session): session closed for user core Apr 23 00:09:13.175325 systemd[1]: sshd@12-10.0.0.19:22-10.0.0.1:44232.service: Deactivated successfully. Apr 23 00:09:13.318448 systemd[1]: session-14.scope: Deactivated successfully. Apr 23 00:09:13.361947 systemd[1]: session-14.scope: Consumed 3.177s CPU time, 14.4M memory peak. Apr 23 00:09:13.474357 systemd-logind[1614]: Session 14 logged out. Waiting for processes to exit. Apr 23 00:09:13.482804 systemd-logind[1614]: Removed session 14. Apr 23 00:09:13.511121 systemd[1]: Started sshd@13-10.0.0.19:22-10.0.0.1:34190.service - OpenSSH per-connection server daemon (10.0.0.1:34190). 
Apr 23 00:09:15.302457 sshd[5151]: Accepted publickey for core from 10.0.0.1 port 34190 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:09:15.339952 sshd-session[5151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:09:15.604021 kubelet[3117]: E0423 00:09:15.581268 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.01s" Apr 23 00:09:16.052920 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 23 00:09:16.061382 systemd-logind[1614]: New session 15 of user core. Apr 23 00:09:17.405869 kubelet[3117]: E0423 00:09:17.401411 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:09:18.807547 kubelet[3117]: E0423 00:09:18.788472 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:09:19.168521 kubelet[3117]: E0423 00:09:19.168374 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:09:21.766232 kubelet[3117]: E0423 00:09:21.764487 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.07s" Apr 23 00:09:23.711026 kubelet[3117]: E0423 00:09:23.708251 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.161s" Apr 23 00:09:28.286267 sshd[5161]: Connection closed by 10.0.0.1 port 34190 Apr 23 00:09:28.314112 sshd-session[5151]: pam_unix(sshd:session): session closed for user core Apr 23 00:09:28.354698 systemd[1]: Started sshd@14-10.0.0.19:22-10.0.0.1:33490.service - OpenSSH per-connection server daemon 
(10.0.0.1:33490). Apr 23 00:09:28.482405 systemd[1]: sshd@13-10.0.0.19:22-10.0.0.1:34190.service: Deactivated successfully. Apr 23 00:09:28.641969 systemd[1]: session-15.scope: Deactivated successfully. Apr 23 00:09:28.645216 systemd[1]: session-15.scope: Consumed 5.414s CPU time, 23.3M memory peak. Apr 23 00:09:28.680508 systemd-logind[1614]: Session 15 logged out. Waiting for processes to exit. Apr 23 00:09:28.816094 systemd-logind[1614]: Removed session 15. Apr 23 00:09:30.868402 sshd[5210]: Accepted publickey for core from 10.0.0.1 port 33490 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:09:30.975469 sshd-session[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:09:31.283827 systemd-logind[1614]: New session 16 of user core. Apr 23 00:09:31.346365 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 23 00:09:31.693445 kubelet[3117]: E0423 00:09:31.680940 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:09:37.806899 kubelet[3117]: E0423 00:09:37.806497 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.199s" Apr 23 00:09:38.271237 kubelet[3117]: E0423 00:09:38.268090 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:09:39.883176 containerd[1644]: time="2026-04-23T00:09:39.882009071Z" level=info msg="container event discarded" container=f9b437b56b6419f11b44d26361d334a3579aaf245e3b71829abad3eb6f25ae0b type=CONTAINER_CREATED_EVENT Apr 23 00:09:41.012138 kubelet[3117]: E0423 00:09:41.002357 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.439s" Apr 23 00:09:41.326260 
kubelet[3117]: E0423 00:09:41.310514 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:09:42.509533 kubelet[3117]: E0423 00:09:42.507259 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.389s" Apr 23 00:09:42.978316 kubelet[3117]: E0423 00:09:42.975949 3117 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 23 00:09:44.686543 systemd[1]: cri-containerd-7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab.scope: Deactivated successfully. Apr 23 00:09:44.813782 systemd[1]: cri-containerd-7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab.scope: Consumed 16.259s CPU time, 26.5M memory peak. Apr 23 00:09:44.973964 containerd[1644]: time="2026-04-23T00:09:44.960557078Z" level=info msg="received container exit event container_id:\"7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab\" id:\"7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab\" pid:4782 exit_status:1 exited_at:{seconds:1776902984 nanos:844384154}" Apr 23 00:09:46.501318 containerd[1644]: time="2026-04-23T00:09:46.499279601Z" level=info msg="container event discarded" container=f9b437b56b6419f11b44d26361d334a3579aaf245e3b71829abad3eb6f25ae0b type=CONTAINER_STARTED_EVENT Apr 23 00:09:47.108314 containerd[1644]: time="2026-04-23T00:09:47.096937655Z" level=info msg="container event discarded" container=304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f type=CONTAINER_STOPPED_EVENT Apr 23 00:09:48.081367 containerd[1644]: time="2026-04-23T00:09:48.078532078Z" level=info msg="container event discarded" container=2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a 
type=CONTAINER_STOPPED_EVENT Apr 23 00:09:48.311396 containerd[1644]: time="2026-04-23T00:09:48.304373706Z" level=info msg="container event discarded" container=f9b437b56b6419f11b44d26361d334a3579aaf245e3b71829abad3eb6f25ae0b type=CONTAINER_STOPPED_EVENT Apr 23 00:09:49.447380 kubelet[3117]: E0423 00:09:49.447173 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.837s" Apr 23 00:09:50.657040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab-rootfs.mount: Deactivated successfully. Apr 23 00:09:51.192172 containerd[1644]: time="2026-04-23T00:09:51.112273635Z" level=info msg="container event discarded" container=2ee2b6fd93d851c9e8bc51f3e51268677e2ab5b2b2e51fdd4dd4322ab736fa19 type=CONTAINER_DELETED_EVENT Apr 23 00:09:51.983683 kubelet[3117]: E0423 00:09:51.977190 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:09:53.020488 kubelet[3117]: I0423 00:09:53.014299 3117 scope.go:122] "RemoveContainer" containerID="593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b" Apr 23 00:09:53.241383 kubelet[3117]: E0423 00:09:53.221256 3117 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 23 00:09:53.260328 containerd[1644]: time="2026-04-23T00:09:53.038256180Z" level=info msg="container event discarded" container=95eb83c5c9aa25f92d3e6d21406df3b0726533deddf52164cb7fdcbda4c87d1b type=CONTAINER_CREATED_EVENT Apr 23 00:09:53.445231 kubelet[3117]: E0423 00:09:53.444999 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.05s" Apr 23 
00:09:54.346389 containerd[1644]: time="2026-04-23T00:09:54.344300640Z" level=info msg="RemoveContainer for \"593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b\"" Apr 23 00:09:54.957487 containerd[1644]: time="2026-04-23T00:09:54.957163272Z" level=info msg="RemoveContainer for \"593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b\" returns successfully" Apr 23 00:09:55.881114 kubelet[3117]: E0423 00:09:55.877440 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.432s" Apr 23 00:09:56.093001 kubelet[3117]: I0423 00:09:56.082026 3117 scope.go:122] "RemoveContainer" containerID="593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b" Apr 23 00:09:56.186210 containerd[1644]: time="2026-04-23T00:09:56.149040505Z" level=error msg="ContainerStatus for \"593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b\": not found" Apr 23 00:09:56.368543 kubelet[3117]: E0423 00:09:56.167416 3117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b\": not found" containerID="593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b" Apr 23 00:09:56.368543 kubelet[3117]: I0423 00:09:56.167540 3117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b"} err="failed to get container status \"593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b\": rpc error: code = NotFound desc = an error occurred when try to find container \"593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b\": not found" Apr 23 00:09:56.418326 
kubelet[3117]: I0423 00:09:56.414721 3117 scope.go:122] "RemoveContainer" containerID="7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab" Apr 23 00:09:56.534200 kubelet[3117]: E0423 00:09:56.498258 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:09:56.534200 kubelet[3117]: E0423 00:09:56.502514 3117 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" Apr 23 00:09:58.002153 containerd[1644]: time="2026-04-23T00:09:57.988088384Z" level=info msg="container event discarded" container=6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff type=CONTAINER_CREATED_EVENT Apr 23 00:09:59.209521 kubelet[3117]: E0423 00:09:59.167106 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.214s" Apr 23 00:09:59.599281 sshd[5220]: Connection closed by 10.0.0.1 port 33490 Apr 23 00:09:59.686263 sshd-session[5210]: pam_unix(sshd:session): session closed for user core Apr 23 00:10:00.190342 systemd[1]: sshd@14-10.0.0.19:22-10.0.0.1:33490.service: Deactivated successfully. Apr 23 00:10:00.223472 systemd[1]: sshd@14-10.0.0.19:22-10.0.0.1:33490.service: Consumed 1.128s CPU time, 4.2M memory peak. Apr 23 00:10:00.562186 containerd[1644]: time="2026-04-23T00:10:00.486477498Z" level=info msg="container event discarded" container=95eb83c5c9aa25f92d3e6d21406df3b0726533deddf52164cb7fdcbda4c87d1b type=CONTAINER_STARTED_EVENT Apr 23 00:10:00.588480 systemd[1]: session-16.scope: Deactivated successfully. 
Apr 23 00:10:00.619686 systemd[1]: session-16.scope: Consumed 7.945s CPU time, 15.8M memory peak. Apr 23 00:10:00.769335 systemd-logind[1614]: Session 16 logged out. Waiting for processes to exit. Apr 23 00:10:01.053173 systemd-logind[1614]: Removed session 16. Apr 23 00:10:02.025304 kubelet[3117]: E0423 00:10:02.018556 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.449s" Apr 23 00:10:03.391529 kubelet[3117]: E0423 00:10:03.385357 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.358s" Apr 23 00:10:03.708116 kubelet[3117]: E0423 00:10:03.398364 3117 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 23 00:10:04.644229 kubelet[3117]: E0423 00:10:04.643411 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.179s" Apr 23 00:10:05.400347 containerd[1644]: time="2026-04-23T00:10:05.377466428Z" level=info msg="container event discarded" container=593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b type=CONTAINER_CREATED_EVENT Apr 23 00:10:05.610491 systemd[1]: Started sshd@15-10.0.0.19:22-10.0.0.1:35634.service - OpenSSH per-connection server daemon (10.0.0.1:35634). 
Apr 23 00:10:05.815359 containerd[1644]: time="2026-04-23T00:10:05.597386909Z" level=info msg="container event discarded" container=6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff type=CONTAINER_STARTED_EVENT Apr 23 00:10:07.696214 kubelet[3117]: E0423 00:10:07.692511 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.049s" Apr 23 00:10:12.364550 containerd[1644]: time="2026-04-23T00:10:12.359441402Z" level=info msg="container event discarded" container=593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b type=CONTAINER_STARTED_EVENT Apr 23 00:10:12.987134 sshd[5334]: Accepted publickey for core from 10.0.0.1 port 35634 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:10:13.547474 sshd-session[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:10:14.707170 systemd-logind[1614]: New session 17 of user core. Apr 23 00:10:14.913490 systemd[1]: Started session-17.scope - Session 17 of User core. 
Apr 23 00:10:15.185273 kubelet[3117]: E0423 00:10:15.184712 3117 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:10:04Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:10:04Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:10:04Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:10:04Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.19:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded" Apr 23 00:10:15.531516 kubelet[3117]: E0423 00:10:15.498846 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.683s" Apr 23 00:10:15.736422 kubelet[3117]: E0423 00:10:14.788430 3117 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 23 00:10:19.171470 kubelet[3117]: I0423 00:10:19.164076 3117 scope.go:122] "RemoveContainer" containerID="7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab" Apr 23 00:10:19.402231 kubelet[3117]: E0423 00:10:19.401889 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:10:20.292712 kubelet[3117]: E0423 00:10:20.291327 3117 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" 
with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" Apr 23 00:10:21.113081 kubelet[3117]: E0423 00:10:21.106511 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.414s" Apr 23 00:10:24.568888 kubelet[3117]: E0423 00:10:24.510037 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.353s" Apr 23 00:10:25.657989 kubelet[3117]: E0423 00:10:25.654205 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:10:25.791141 kubelet[3117]: E0423 00:10:25.789349 3117 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 23 00:10:25.982374 kubelet[3117]: I0423 00:10:25.975024 3117 controller.go:171] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Apr 23 00:10:26.079415 kubelet[3117]: E0423 00:10:25.909119 3117 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.19:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 23 00:10:26.811257 kubelet[3117]: E0423 00:10:26.808689 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.154s" Apr 23 00:10:29.976354 kubelet[3117]: E0423 00:10:29.972256 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.022s" Apr 23 00:10:31.509439 
kubelet[3117]: E0423 00:10:31.506757 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.506s" Apr 23 00:10:32.609488 kubelet[3117]: E0423 00:10:32.607697 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:10:33.905360 kubelet[3117]: E0423 00:10:33.904011 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.297s" Apr 23 00:10:35.811394 kubelet[3117]: E0423 00:10:35.807498 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.234s" Apr 23 00:10:36.252443 kubelet[3117]: E0423 00:10:36.246439 3117 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.19:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 23 00:10:36.252443 kubelet[3117]: E0423 00:10:36.189544 3117 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="200ms" Apr 23 00:10:40.576265 kubelet[3117]: E0423 00:10:40.574111 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.862s" Apr 23 00:10:41.487493 kubelet[3117]: E0423 00:10:41.487010 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:10:41.851479 sshd[5360]: Connection closed by 10.0.0.1 port 35634 Apr 23 00:10:41.875263 sshd-session[5334]: pam_unix(sshd:session): session closed for user core Apr 23 00:10:42.474008 
systemd[1]: sshd@15-10.0.0.19:22-10.0.0.1:35634.service: Deactivated successfully. Apr 23 00:10:42.591172 systemd[1]: sshd@15-10.0.0.19:22-10.0.0.1:35634.service: Consumed 2.454s CPU time, 4M memory peak. Apr 23 00:10:42.914378 systemd[1]: session-17.scope: Deactivated successfully. Apr 23 00:10:42.951205 systemd[1]: session-17.scope: Consumed 9.498s CPU time, 16.2M memory peak. Apr 23 00:10:43.061963 systemd-logind[1614]: Session 17 logged out. Waiting for processes to exit. Apr 23 00:10:43.087698 systemd-logind[1614]: Removed session 17. Apr 23 00:10:43.710143 kubelet[3117]: E0423 00:10:43.690541 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.117s" Apr 23 00:10:46.394789 kubelet[3117]: E0423 00:10:46.390979 3117 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.19:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 23 00:10:46.599271 kubelet[3117]: E0423 00:10:46.556943 3117 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Apr 23 00:10:47.540431 systemd[1]: Started sshd@16-10.0.0.19:22-10.0.0.1:56202.service - OpenSSH per-connection server daemon (10.0.0.1:56202). 
Apr 23 00:10:51.712394 kubelet[3117]: E0423 00:10:51.706447 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.014s" Apr 23 00:10:53.052038 kubelet[3117]: E0423 00:10:53.051261 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.27s" Apr 23 00:10:54.063103 sshd[5442]: Accepted publickey for core from 10.0.0.1 port 56202 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:10:54.151393 kubelet[3117]: E0423 00:10:54.065252 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:10:54.502359 sshd-session[5442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:10:55.909414 systemd-logind[1614]: New session 18 of user core. Apr 23 00:10:56.251366 systemd[1]: Started session-18.scope - Session 18 of User core. 
Apr 23 00:10:56.485475 kubelet[3117]: E0423 00:10:56.478535 3117 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.19:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 23 00:10:56.599473 kubelet[3117]: E0423 00:10:56.594019 3117 kubelet_node_status.go:461] "Unable to update node status" err="update node status exceeds retry count" Apr 23 00:10:57.206506 kubelet[3117]: E0423 00:10:57.107531 3117 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Apr 23 00:11:05.088895 kubelet[3117]: E0423 00:11:05.085273 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.491s" Apr 23 00:11:06.043226 kubelet[3117]: E0423 00:11:06.042119 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:11:06.417298 kubelet[3117]: E0423 00:11:06.375529 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:11:07.190269 kubelet[3117]: E0423 00:11:07.189429 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.004s" Apr 23 00:11:08.943273 kubelet[3117]: E0423 00:11:08.935554 3117 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" interval="1.6s" Apr 23 00:11:10.631026 kubelet[3117]: E0423 00:11:10.624241 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.434s" Apr 23 00:11:12.167302 kubelet[3117]: E0423 00:11:12.165978 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.536s" Apr 23 00:11:16.643509 kubelet[3117]: I0423 00:11:16.643149 3117 scope.go:122] "RemoveContainer" containerID="7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab" Apr 23 00:11:16.650751 kubelet[3117]: E0423 00:11:16.650165 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:11:17.113096 containerd[1644]: time="2026-04-23T00:11:17.109438686Z" level=info msg="CreateContainer within sandbox \"1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:7,}" Apr 23 00:11:17.973280 containerd[1644]: time="2026-04-23T00:11:17.970756044Z" level=info msg="Container 2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d: CDI devices from CRI Config.CDIDevices: []" Apr 23 00:11:18.365437 containerd[1644]: time="2026-04-23T00:11:18.364943360Z" level=info msg="CreateContainer within sandbox \"1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:7,} returns container id \"2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d\"" Apr 23 00:11:18.692201 containerd[1644]: time="2026-04-23T00:11:18.674463935Z" level=info msg="StartContainer for \"2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d\"" Apr 23 00:11:18.854098 containerd[1644]: time="2026-04-23T00:11:18.850794138Z" level=info msg="connecting to shim 
2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d" address="unix:///run/containerd/s/6ba32b72164cd91bf659ca1b461d59fa9373c7c833adb85e108c1f63f7cb4764" protocol=ttrpc version=3 Apr 23 00:11:19.928941 kubelet[3117]: E0423 00:11:19.926547 3117 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:11:09Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:11:09Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:11:09Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:11:09Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.19:6443/api/v1/nodes/localhost/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 23 00:11:21.481429 systemd[1]: Started cri-containerd-2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d.scope - libcontainer container 2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d. 
Apr 23 00:11:22.138427 sshd[5467]: Connection closed by 10.0.0.1 port 56202 Apr 23 00:11:22.163186 sshd-session[5442]: pam_unix(sshd:session): session closed for user core Apr 23 00:11:22.186179 kubelet[3117]: E0423 00:11:21.972524 3117 status_manager.go:1068] "Failed to update status for pod" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08643754-64bc-4a94-a41b-b8af580dc571\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-23T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-23T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"200m\\\"},\\\"containerID\\\":\\\"containerd://7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab\\\",\\\"image\\\":\\\"registry.k8s.io/kube-controller-manager:v1.35.4\\\",\\\"imageID\\\":\\\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"containerd://593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-23T00:06:41Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-23T00:05:12Z\\\"}},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"200m\\\"}},\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"containerd://7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-23T00:09:44Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-23T00:07:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}]}}\" for pod \"kube-system\"/\"kube-controller-manager-localhost\": Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/kube-controller-manager-localhost" Apr 23 00:11:22.528328 kubelet[3117]: E0423 00:11:22.486493 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.823s" Apr 23 00:11:22.701831 kubelet[3117]: E0423 00:11:22.687360 3117 controller.go:201] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Apr 23 00:11:22.795373 systemd[1]: sshd@16-10.0.0.19:22-10.0.0.1:56202.service: Deactivated successfully. Apr 23 00:11:22.834472 systemd[1]: sshd@16-10.0.0.19:22-10.0.0.1:56202.service: Consumed 2.353s CPU time, 4M memory peak. Apr 23 00:11:22.954911 systemd[1]: session-18.scope: Deactivated successfully. Apr 23 00:11:22.969507 systemd[1]: session-18.scope: Consumed 11.198s CPU time, 15.9M memory peak. Apr 23 00:11:23.064544 systemd-logind[1614]: Session 18 logged out. Waiting for processes to exit. Apr 23 00:11:23.544403 systemd-logind[1614]: Removed session 18. Apr 23 00:11:24.679962 containerd[1644]: time="2026-04-23T00:11:24.678319226Z" level=info msg="StopContainer for \"18d6934792aa16c489d730fea71b4d0b880143b6342c8308ab99a47b296803cb\" with timeout 30 (s)" Apr 23 00:11:24.804512 containerd[1644]: time="2026-04-23T00:11:24.802664161Z" level=info msg="Stop container \"18d6934792aa16c489d730fea71b4d0b880143b6342c8308ab99a47b296803cb\" with signal terminated" Apr 23 00:11:25.355313 containerd[1644]: time="2026-04-23T00:11:25.352397533Z" level=error msg="get state for 2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d" error="context deadline exceeded" Apr 23 00:11:25.393955 containerd[1644]: time="2026-04-23T00:11:25.384359971Z" level=warning msg="unknown status" status=0 Apr 23 00:11:25.793495 kubelet[3117]: E0423 00:11:25.752523 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.175s" Apr 23 00:11:26.613158 containerd[1644]: time="2026-04-23T00:11:26.604380250Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 23 00:11:27.786501 systemd[1]: Started sshd@17-10.0.0.19:22-10.0.0.1:49740.service - OpenSSH per-connection server daemon (10.0.0.1:49740). 
Apr 23 00:11:29.903440 kubelet[3117]: E0423 00:11:29.894406 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.319s" Apr 23 00:11:30.780005 containerd[1644]: time="2026-04-23T00:11:30.778473583Z" level=info msg="StartContainer for \"2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d\" returns successfully" Apr 23 00:11:30.906433 kubelet[3117]: E0423 00:11:30.905185 3117 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.19:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 23 00:11:33.642115 sshd[5586]: Accepted publickey for core from 10.0.0.1 port 49740 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:11:33.880374 sshd-session[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:11:34.066502 kubelet[3117]: E0423 00:11:34.042385 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.851s" Apr 23 00:11:34.608264 systemd-logind[1614]: New session 19 of user core. Apr 23 00:11:34.709465 systemd[1]: Started session-19.scope - Session 19 of User core. 
Apr 23 00:11:36.063910 kubelet[3117]: E0423 00:11:36.056834 3117 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Apr 23 00:11:36.288337 kubelet[3117]: E0423 00:11:36.284817 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.107s" Apr 23 00:11:36.661368 kubelet[3117]: E0423 00:11:36.660546 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:11:38.428953 kubelet[3117]: E0423 00:11:38.428491 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.183s" Apr 23 00:11:40.115138 kubelet[3117]: E0423 00:11:40.113249 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.203s" Apr 23 00:11:40.950287 kubelet[3117]: E0423 00:11:40.948819 3117 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.19:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 23 00:11:41.680016 containerd[1644]: time="2026-04-23T00:11:41.677856389Z" level=info msg="container event discarded" container=f83c8ee3bbf97811e9c93df4bf28fb8568026b1a78603c999142716d9f939240 type=CONTAINER_CREATED_EVENT Apr 23 00:11:41.694526 containerd[1644]: time="2026-04-23T00:11:41.682932333Z" level=info msg="container event discarded" container=f83c8ee3bbf97811e9c93df4bf28fb8568026b1a78603c999142716d9f939240 type=CONTAINER_STARTED_EVENT Apr 23 00:11:42.041452 containerd[1644]: time="2026-04-23T00:11:42.013681277Z" level=info msg="container event discarded" 
container=0d0a0a70c7422b2cef42b9e004a15e1a1059fe1718ea8a33425318ae2adec1bc type=CONTAINER_CREATED_EVENT Apr 23 00:11:42.041452 containerd[1644]: time="2026-04-23T00:11:42.013913909Z" level=info msg="container event discarded" container=0d0a0a70c7422b2cef42b9e004a15e1a1059fe1718ea8a33425318ae2adec1bc type=CONTAINER_STARTED_EVENT Apr 23 00:11:42.130106 kubelet[3117]: E0423 00:11:42.126852 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.545s" Apr 23 00:11:42.401851 containerd[1644]: time="2026-04-23T00:11:42.401172050Z" level=info msg="container event discarded" container=6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff type=CONTAINER_STOPPED_EVENT Apr 23 00:11:43.294463 containerd[1644]: time="2026-04-23T00:11:43.279352426Z" level=info msg="container event discarded" container=287399b6d325d1cabc8212367f7119fd92077bfd83d079645ffb7130eb072576 type=CONTAINER_CREATED_EVENT Apr 23 00:11:44.107428 containerd[1644]: time="2026-04-23T00:11:44.100225377Z" level=info msg="container event discarded" container=2714570d4082d5d485c063c52cbfdfa66c9ab0f3d172683b4bed51f54bfa5d8a type=CONTAINER_CREATED_EVENT Apr 23 00:11:44.344374 containerd[1644]: time="2026-04-23T00:11:44.341380553Z" level=info msg="container event discarded" container=593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b type=CONTAINER_STOPPED_EVENT Apr 23 00:11:44.465395 containerd[1644]: time="2026-04-23T00:11:44.403213893Z" level=info msg="container event discarded" container=2d4d175647079f0b990d8bcaaae7aeded36c6ab8cb7b30e99de65b5f0f325c8a type=CONTAINER_DELETED_EVENT Apr 23 00:11:44.690090 containerd[1644]: time="2026-04-23T00:11:44.685535073Z" level=info msg="container event discarded" container=287399b6d325d1cabc8212367f7119fd92077bfd83d079645ffb7130eb072576 type=CONTAINER_STARTED_EVENT Apr 23 00:11:44.877431 containerd[1644]: time="2026-04-23T00:11:44.844111723Z" level=info msg="container event discarded" 
container=2714570d4082d5d485c063c52cbfdfa66c9ab0f3d172683b4bed51f54bfa5d8a type=CONTAINER_STARTED_EVENT Apr 23 00:11:45.264510 kubelet[3117]: E0423 00:11:45.250461 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.679s" Apr 23 00:11:45.972455 containerd[1644]: time="2026-04-23T00:11:45.970849180Z" level=info msg="container event discarded" container=304f26931eb8c93d05b58d19283ef9089240982ffc8ef2b1d61dec61bf10ae5f type=CONTAINER_DELETED_EVENT Apr 23 00:11:46.666329 kubelet[3117]: E0423 00:11:46.661037 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.402s" Apr 23 00:11:47.077050 kubelet[3117]: E0423 00:11:47.075246 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:11:52.895539 sshd[5614]: Connection closed by 10.0.0.1 port 49740 Apr 23 00:11:52.901373 sshd-session[5586]: pam_unix(sshd:session): session closed for user core Apr 23 00:11:53.389305 systemd[1]: sshd@17-10.0.0.19:22-10.0.0.1:49740.service: Deactivated successfully. Apr 23 00:11:53.415166 systemd[1]: sshd@17-10.0.0.19:22-10.0.0.1:49740.service: Consumed 1.653s CPU time, 4M memory peak. Apr 23 00:11:53.663497 systemd[1]: session-19.scope: Deactivated successfully. Apr 23 00:11:53.754190 systemd[1]: session-19.scope: Consumed 5.217s CPU time, 16.1M memory peak. Apr 23 00:11:53.932846 systemd-logind[1614]: Session 19 logged out. Waiting for processes to exit. Apr 23 00:11:54.179858 systemd-logind[1614]: Removed session 19. 
Apr 23 00:11:54.414202 kubelet[3117]: E0423 00:11:54.412830 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.866s" Apr 23 00:11:56.033070 containerd[1644]: time="2026-04-23T00:11:56.032382245Z" level=info msg="Kill container \"18d6934792aa16c489d730fea71b4d0b880143b6342c8308ab99a47b296803cb\"" Apr 23 00:11:57.393543 systemd[1]: cri-containerd-18d6934792aa16c489d730fea71b4d0b880143b6342c8308ab99a47b296803cb.scope: Deactivated successfully. Apr 23 00:11:57.479199 systemd[1]: cri-containerd-18d6934792aa16c489d730fea71b4d0b880143b6342c8308ab99a47b296803cb.scope: Consumed 20min 43.489s CPU time, 213.2M memory peak. Apr 23 00:11:57.706083 containerd[1644]: time="2026-04-23T00:11:57.697092033Z" level=info msg="received container exit event container_id:\"18d6934792aa16c489d730fea71b4d0b880143b6342c8308ab99a47b296803cb\" id:\"18d6934792aa16c489d730fea71b4d0b880143b6342c8308ab99a47b296803cb\" pid:2827 exit_status:137 exited_at:{seconds:1776903117 nanos:576793491}" Apr 23 00:11:58.024419 kubelet[3117]: E0423 00:11:58.000100 3117 status_manager.go:1068] "Failed to update status for pod" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08643754-64bc-4a94-a41b-b8af580dc571\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-23T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-23T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"200m\\\"},\\\"containerID\\\":\\\"containerd://7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab\\\",\\\"image\\\":\\\"registry.k8s.io/kube-controller-manager:v1.35.4\\\",\\\"imageID\\\":\\\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"containerd://7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-23T00:09:44Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-23T00:07:45Z\\\"}},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"200m\\\"}},\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}]}}\" for pod \"kube-system\"/\"kube-controller-manager-localhost\": Patch \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost/status\": unexpected EOF" pod="kube-system/kube-controller-manager-localhost" Apr 23 00:11:58.051211 kubelet[3117]: E0423 00:11:58.050925 3117 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.19:6443: connect: connection refused" podUID="0c67841a71302de5212118cd86fd71ba" pod="kube-system/kube-apiserver-localhost" Apr 23 00:11:58.078050 kubelet[3117]: E0423 00:11:58.050982 
3117 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-6b57s.18a8d3bea7fe3b73\": unexpected EOF" event="&Event{ObjectMeta:{coredns-7d764666f9-6b57s.18a8d3bea7fe3b73 kube-system 1185 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-6b57s,UID:ac974d3d-8139-4155-8d41-38e6c88e34b1,APIVersion:v1,ResourceVersion:969,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-23 00:07:48 +0000 UTC,LastTimestamp:2026-04-23 00:10:18.117420106 +0000 UTC m=+718.315252746,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 23 00:11:58.103461 kubelet[3117]: E0423 00:11:58.069908 3117 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:11:50Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:11:50Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:11:50Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:11:50Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.19:6443/api/v1/nodes/localhost/status?timeout=10s\": unexpected EOF" Apr 23 00:11:58.190369 kubelet[3117]: E0423 00:11:58.167067 3117 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:11:58.190369 kubelet[3117]: E0423 00:11:58.070382 3117 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.19:6443: connect: connection refused" podUID="14bc29ec35edba17af38052ec24275f2" pod="kube-system/kube-controller-manager-localhost" Apr 23 00:11:58.190369 kubelet[3117]: E0423 00:11:58.176777 3117 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.19:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" Apr 23 00:11:58.190369 kubelet[3117]: E0423 00:11:58.176834 3117 kubelet_node_status.go:461] "Unable to update node status" err="update node status exceeds retry count" Apr 23 00:11:58.195925 kubelet[3117]: E0423 00:11:58.190021 3117 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.19:6443: connect: connection refused" podUID="0c67841a71302de5212118cd86fd71ba" pod="kube-system/kube-apiserver-localhost" Apr 23 00:11:58.400705 systemd[1]: Started sshd@18-10.0.0.19:22-10.0.0.1:37084.service - OpenSSH per-connection server daemon (10.0.0.1:37084). 
Apr 23 00:11:58.541007 kubelet[3117]: E0423 00:11:58.539795 3117 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.19:6443: connect: connection refused" podUID="14bc29ec35edba17af38052ec24275f2" pod="kube-system/kube-controller-manager-localhost" Apr 23 00:11:58.592191 kubelet[3117]: E0423 00:11:58.591280 3117 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.19:6443: connect: connection refused" podUID="0c67841a71302de5212118cd86fd71ba" pod="kube-system/kube-apiserver-localhost" Apr 23 00:11:58.792243 kubelet[3117]: E0423 00:11:58.788099 3117 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.19:6443: connect: connection refused" podUID="14bc29ec35edba17af38052ec24275f2" pod="kube-system/kube-controller-manager-localhost" Apr 23 00:11:58.890386 kubelet[3117]: E0423 00:11:58.884848 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:11:58.900100 kubelet[3117]: E0423 00:11:58.899400 3117 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.19:6443: connect: connection refused" podUID="0c67841a71302de5212118cd86fd71ba" pod="kube-system/kube-apiserver-localhost" Apr 23 00:11:59.621215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18d6934792aa16c489d730fea71b4d0b880143b6342c8308ab99a47b296803cb-rootfs.mount: Deactivated successfully. 
Apr 23 00:11:59.638411 kubelet[3117]: E0423 00:11:59.637184 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.074s" Apr 23 00:11:59.759343 kubelet[3117]: E0423 00:11:59.752549 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:11:59.867324 containerd[1644]: time="2026-04-23T00:11:59.867107127Z" level=info msg="StopContainer for \"18d6934792aa16c489d730fea71b4d0b880143b6342c8308ab99a47b296803cb\" returns successfully" Apr 23 00:11:59.954673 kubelet[3117]: E0423 00:11:59.876278 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:12:00.095254 containerd[1644]: time="2026-04-23T00:12:00.091834677Z" level=info msg="CreateContainer within sandbox \"ebae48b4ff064f80d8f04b1b1d03180eaa106fa5a6237df5d9f742b3a5bb6d22\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:1,}" Apr 23 00:12:00.273030 kubelet[3117]: E0423 00:12:00.269524 3117 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-6b57s.18a8d3bea7fe3b73\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{coredns-7d764666f9-6b57s.18a8d3bea7fe3b73 kube-system 1185 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-6b57s,UID:ac974d3d-8139-4155-8d41-38e6c88e34b1,APIVersion:v1,ResourceVersion:969,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-23 00:07:48 +0000 UTC,LastTimestamp:2026-04-23 00:10:18.117420106 +0000 UTC m=+718.315252746,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 23 00:12:00.314484 containerd[1644]: time="2026-04-23T00:12:00.310003209Z" level=info msg="Container fcb3332cacb27ead2fa6fec4748e1bee8aabe80b71b18923c413322efc316ddd: CDI devices from CRI Config.CDIDevices: []" Apr 23 00:12:00.502908 containerd[1644]: time="2026-04-23T00:12:00.502326006Z" level=info msg="CreateContainer within sandbox \"ebae48b4ff064f80d8f04b1b1d03180eaa106fa5a6237df5d9f742b3a5bb6d22\" for &ContainerMetadata{Name:kube-apiserver,Attempt:1,} returns container id \"fcb3332cacb27ead2fa6fec4748e1bee8aabe80b71b18923c413322efc316ddd\"" Apr 23 00:12:00.586278 containerd[1644]: time="2026-04-23T00:12:00.575430621Z" level=info msg="StartContainer for \"fcb3332cacb27ead2fa6fec4748e1bee8aabe80b71b18923c413322efc316ddd\"" Apr 23 00:12:00.586124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2262473323.mount: Deactivated successfully. Apr 23 00:12:00.694140 sshd[5704]: Accepted publickey for core from 10.0.0.1 port 37084 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:12:00.736258 containerd[1644]: time="2026-04-23T00:12:00.735164972Z" level=info msg="connecting to shim fcb3332cacb27ead2fa6fec4748e1bee8aabe80b71b18923c413322efc316ddd" address="unix:///run/containerd/s/d35460d04c7a65f745c2a7f60ab15985a784d7e07e95ba5b2ca4579b97f30e0a" protocol=ttrpc version=3 Apr 23 00:12:00.773157 sshd-session[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:12:01.088052 systemd-logind[1614]: New session 20 of user core. Apr 23 00:12:01.106467 systemd[1]: Started session-20.scope - Session 20 of User core. 
Apr 23 00:12:01.364083 systemd[1]: Started cri-containerd-fcb3332cacb27ead2fa6fec4748e1bee8aabe80b71b18923c413322efc316ddd.scope - libcontainer container fcb3332cacb27ead2fa6fec4748e1bee8aabe80b71b18923c413322efc316ddd. Apr 23 00:12:03.353834 containerd[1644]: time="2026-04-23T00:12:03.353337195Z" level=info msg="StartContainer for \"fcb3332cacb27ead2fa6fec4748e1bee8aabe80b71b18923c413322efc316ddd\" returns successfully" Apr 23 00:12:04.066884 containerd[1644]: time="2026-04-23T00:12:04.064187929Z" level=info msg="container event discarded" container=e3f2e90c95c3093882845bfc73d92b9a22d9da7efe4dd2f62f42217944939ffd type=CONTAINER_CREATED_EVENT Apr 23 00:12:04.477856 kubelet[3117]: E0423 00:12:04.475816 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:12:04.549486 kubelet[3117]: E0423 00:12:04.468249 3117 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.19:6443: connect: connection refused" podUID="14bc29ec35edba17af38052ec24275f2" pod="kube-system/kube-controller-manager-localhost" Apr 23 00:12:04.665199 kubelet[3117]: E0423 00:12:04.651369 3117 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.19:6443: connect: connection refused" podUID="0c67841a71302de5212118cd86fd71ba" pod="kube-system/kube-apiserver-localhost" Apr 23 00:12:05.439537 kubelet[3117]: E0423 00:12:05.435351 3117 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" Apr 23 00:12:05.444874 kubelet[3117]: E0423 00:12:05.444745 3117 controller.go:251] "Failed to 
update lease" err="Put \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" Apr 23 00:12:05.558915 kubelet[3117]: E0423 00:12:05.552816 3117 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" Apr 23 00:12:05.612372 kubelet[3117]: E0423 00:12:05.611516 3117 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" Apr 23 00:12:05.622940 kubelet[3117]: E0423 00:12:05.621285 3117 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" Apr 23 00:12:05.622940 kubelet[3117]: I0423 00:12:05.622851 3117 controller.go:171] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Apr 23 00:12:05.636529 kubelet[3117]: E0423 00:12:05.623287 3117 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="200ms" Apr 23 00:12:05.640466 kubelet[3117]: E0423 00:12:05.638357 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:12:05.849252 kubelet[3117]: E0423 00:12:05.844184 3117 controller.go:201] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="400ms" Apr 23 00:12:05.861312 containerd[1644]: time="2026-04-23T00:12:05.858255190Z" level=info msg="container event discarded" container=e3f2e90c95c3093882845bfc73d92b9a22d9da7efe4dd2f62f42217944939ffd type=CONTAINER_STARTED_EVENT Apr 23 00:12:06.291035 kubelet[3117]: E0423 00:12:06.255284 3117 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.19:6443: connect: connection refused" podUID="14bc29ec35edba17af38052ec24275f2" pod="kube-system/kube-controller-manager-localhost" Apr 23 00:12:06.326720 kubelet[3117]: E0423 00:12:06.325396 3117 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="800ms" Apr 23 00:12:06.337237 kubelet[3117]: E0423 00:12:06.325999 3117 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.19:6443: connect: connection refused" podUID="0c67841a71302de5212118cd86fd71ba" pod="kube-system/kube-apiserver-localhost" Apr 23 00:12:06.812134 kubelet[3117]: E0423 00:12:06.810199 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:12:07.790812 kubelet[3117]: E0423 00:12:07.790236 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:12:17.185873 sshd[5736]: Connection closed by 10.0.0.1 
port 37084 Apr 23 00:12:17.213223 kubelet[3117]: E0423 00:12:17.186398 3117 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Apr 23 00:12:17.211170 sshd-session[5704]: pam_unix(sshd:session): session closed for user core Apr 23 00:12:17.356175 systemd[1]: sshd@18-10.0.0.19:22-10.0.0.1:37084.service: Deactivated successfully. Apr 23 00:12:17.438126 systemd[1]: session-20.scope: Deactivated successfully. Apr 23 00:12:17.438694 systemd[1]: session-20.scope: Consumed 4.132s CPU time, 16M memory peak. Apr 23 00:12:17.514453 systemd-logind[1614]: Session 20 logged out. Waiting for processes to exit. Apr 23 00:12:17.614635 systemd-logind[1614]: Removed session 20. Apr 23 00:12:18.614630 kubelet[3117]: E0423 00:12:18.614245 3117 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:12:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:12:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:12:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:12:08Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.19:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded" Apr 23 00:12:18.789082 kubelet[3117]: E0423 00:12:18.788246 3117 status_manager.go:1045] "Failed to get status for pod" err="Get 
\"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="14bc29ec35edba17af38052ec24275f2" pod="kube-system/kube-controller-manager-localhost" Apr 23 00:12:19.650638 kubelet[3117]: E0423 00:12:19.650297 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:12:20.326091 kubelet[3117]: E0423 00:12:20.318967 3117 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-6b57s.18a8d3bea7fe3b73\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-6b57s.18a8d3bea7fe3b73 kube-system 1185 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-6b57s,UID:ac974d3d-8139-4155-8d41-38e6c88e34b1,APIVersion:v1,ResourceVersion:969,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-23 00:07:48 +0000 UTC,LastTimestamp:2026-04-23 00:10:18.117420106 +0000 UTC m=+718.315252746,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 23 00:12:22.662906 systemd[1]: Started sshd@19-10.0.0.19:22-10.0.0.1:55358.service - OpenSSH per-connection server daemon (10.0.0.1:55358). 
Apr 23 00:12:25.170313 sshd[5846]: Accepted publickey for core from 10.0.0.1 port 55358 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:12:25.645521 sshd-session[5846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:12:26.286214 systemd-logind[1614]: New session 21 of user core. Apr 23 00:12:26.336183 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 23 00:12:28.669479 kubelet[3117]: E0423 00:12:28.655531 3117 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.19:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 23 00:12:29.194097 kubelet[3117]: E0423 00:12:29.191483 3117 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="0c67841a71302de5212118cd86fd71ba" pod="kube-system/kube-apiserver-localhost" Apr 23 00:12:29.286670 kubelet[3117]: E0423 00:12:29.235523 3117 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="3.2s" Apr 23 00:12:30.786976 kubelet[3117]: E0423 00:12:30.784259 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.233s" Apr 23 00:12:36.649130 kubelet[3117]: E0423 00:12:36.648047 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:12:38.267445 containerd[1644]: time="2026-04-23T00:12:38.263919781Z" level=info msg="container event discarded" container=7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab type=CONTAINER_CREATED_EVENT Apr 23 00:12:38.734541 
kubelet[3117]: E0423 00:12:38.721440 3117 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.19:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 23 00:12:39.561274 kubelet[3117]: E0423 00:12:39.560514 3117 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="14bc29ec35edba17af38052ec24275f2" pod="kube-system/kube-controller-manager-localhost" Apr 23 00:12:40.814270 kubelet[3117]: E0423 00:12:40.694177 3117 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-6b57s.18a8d3bea7fe3b73\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-6b57s.18a8d3bea7fe3b73 kube-system 1185 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-6b57s,UID:ac974d3d-8139-4155-8d41-38e6c88e34b1,APIVersion:v1,ResourceVersion:969,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-23 00:07:48 +0000 UTC,LastTimestamp:2026-04-23 00:10:18.117420106 +0000 UTC m=+718.315252746,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 23 00:12:42.486240 kubelet[3117]: E0423 00:12:42.485113 3117 controller.go:201] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="6.4s" Apr 23 00:12:44.171993 sshd[5864]: Connection closed by 10.0.0.1 port 55358 Apr 23 00:12:44.176740 sshd-session[5846]: pam_unix(sshd:session): session closed for user core Apr 23 00:12:44.616537 systemd[1]: sshd@19-10.0.0.19:22-10.0.0.1:55358.service: Deactivated successfully. Apr 23 00:12:44.655793 systemd[1]: sshd@19-10.0.0.19:22-10.0.0.1:55358.service: Consumed 1.227s CPU time, 4.2M memory peak. Apr 23 00:12:44.800320 systemd[1]: session-21.scope: Deactivated successfully. Apr 23 00:12:44.876912 systemd[1]: session-21.scope: Consumed 5.511s CPU time, 18.1M memory peak. Apr 23 00:12:45.033933 systemd-logind[1614]: Session 21 logged out. Waiting for processes to exit. Apr 23 00:12:45.247252 systemd-logind[1614]: Removed session 21. Apr 23 00:12:45.445786 containerd[1644]: time="2026-04-23T00:12:45.444416932Z" level=info msg="container event discarded" container=7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab type=CONTAINER_STARTED_EVENT Apr 23 00:12:46.086579 kubelet[3117]: E0423 00:12:46.072546 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.491s" Apr 23 00:12:48.747026 kubelet[3117]: E0423 00:12:48.745538 3117 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.19:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 23 00:12:49.535441 systemd[1]: Started sshd@20-10.0.0.19:22-10.0.0.1:55964.service - OpenSSH per-connection server daemon (10.0.0.1:55964). 
Apr 23 00:12:49.714153 kubelet[3117]: E0423 00:12:49.713236 3117 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="0c67841a71302de5212118cd86fd71ba" pod="kube-system/kube-apiserver-localhost" Apr 23 00:12:50.654501 sshd[5948]: Accepted publickey for core from 10.0.0.1 port 55964 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:12:50.660900 sshd-session[5948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:12:51.051116 systemd-logind[1614]: New session 22 of user core. Apr 23 00:12:51.239112 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 23 00:12:54.562214 kubelet[3117]: E0423 00:12:54.560801 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:12:58.763660 kubelet[3117]: E0423 00:12:58.762766 3117 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.19:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 23 00:12:58.763660 kubelet[3117]: E0423 00:12:58.762933 3117 kubelet_node_status.go:461] "Unable to update node status" err="update node status exceeds retry count" Apr 23 00:12:58.940155 kubelet[3117]: E0423 00:12:58.937713 3117 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 23 00:12:59.747929 kubelet[3117]: E0423 00:12:59.746632 3117 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" 
podUID="14bc29ec35edba17af38052ec24275f2" pod="kube-system/kube-controller-manager-localhost" Apr 23 00:13:00.853132 kubelet[3117]: E0423 00:13:00.851981 3117 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-6b57s.18a8d3bea7fe3b73\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-6b57s.18a8d3bea7fe3b73 kube-system 1185 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-6b57s,UID:ac974d3d-8139-4155-8d41-38e6c88e34b1,APIVersion:v1,ResourceVersion:969,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-23 00:07:48 +0000 UTC,LastTimestamp:2026-04-23 00:10:18.117420106 +0000 UTC m=+718.315252746,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 23 00:13:04.004256 sshd[5958]: Connection closed by 10.0.0.1 port 55964 Apr 23 00:13:04.006758 sshd-session[5948]: pam_unix(sshd:session): session closed for user core Apr 23 00:13:04.038889 systemd[1]: sshd@20-10.0.0.19:22-10.0.0.1:55964.service: Deactivated successfully. Apr 23 00:13:04.041845 systemd[1]: session-22.scope: Deactivated successfully. Apr 23 00:13:04.042153 systemd[1]: session-22.scope: Consumed 1.969s CPU time, 14.6M memory peak. Apr 23 00:13:04.042888 systemd-logind[1614]: Session 22 logged out. Waiting for processes to exit. Apr 23 00:13:04.044359 systemd-logind[1614]: Removed session 22. 
Apr 23 00:13:04.812768 containerd[1644]: time="2026-04-23T00:13:04.811976259Z" level=info msg="container event discarded" container=e3f2e90c95c3093882845bfc73d92b9a22d9da7efe4dd2f62f42217944939ffd type=CONTAINER_STOPPED_EVENT Apr 23 00:13:06.710254 kubelet[3117]: E0423 00:13:06.709391 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:13:09.272119 systemd[1]: Started sshd@21-10.0.0.19:22-10.0.0.1:41018.service - OpenSSH per-connection server daemon (10.0.0.1:41018). Apr 23 00:13:09.838628 kubelet[3117]: E0423 00:13:09.834026 3117 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="0c67841a71302de5212118cd86fd71ba" pod="kube-system/kube-apiserver-localhost" Apr 23 00:13:11.357195 sshd[6048]: Accepted publickey for core from 10.0.0.1 port 41018 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:13:11.408268 sshd-session[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:13:11.642654 systemd-logind[1614]: New session 23 of user core. Apr 23 00:13:11.702915 systemd[1]: Started session-23.scope - Session 23 of User core. 
Apr 23 00:13:12.807996 containerd[1644]: time="2026-04-23T00:13:12.806238911Z" level=info msg="container event discarded" container=6921f5663b35bd6080135217583f646e20a32e243a09fc6f9e76b1df5acf96ff type=CONTAINER_DELETED_EVENT Apr 23 00:13:15.967839 kubelet[3117]: E0423 00:13:15.967474 3117 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 23 00:13:16.552796 kubelet[3117]: E0423 00:13:16.552340 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:13:19.593120 kubelet[3117]: E0423 00:13:19.592787 3117 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:13:09Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:13:09Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:13:09Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:13:09Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.19:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 23 00:13:20.148382 kubelet[3117]: E0423 00:13:20.147174 3117 status_manager.go:1045] "Failed to get status for pod" err="Get 
\"https://10.0.0.19:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="14bc29ec35edba17af38052ec24275f2" pod="kube-system/kube-controller-manager-localhost" Apr 23 00:13:23.050973 kubelet[3117]: E0423 00:13:23.014149 3117 reflector.go:204] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object - error from a previous attempt: net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 23 00:13:23.377609 kubelet[3117]: E0423 00:13:23.377141 3117 reflector.go:204] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object - error from a previous attempt: net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 23 00:13:23.391518 kubelet[3117]: E0423 00:13:23.390738 3117 reflector.go:204] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object - error from a previous attempt: net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Apr 23 00:13:23.391518 kubelet[3117]: E0423 00:13:23.390821 3117 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-apiserver-localhost\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no 
relationship found between node 'localhost' and this object" podUID="0c67841a71302de5212118cd86fd71ba" pod="kube-system/kube-apiserver-localhost" Apr 23 00:13:23.494907 sshd[6052]: Connection closed by 10.0.0.1 port 41018 Apr 23 00:13:23.501819 sshd-session[6048]: pam_unix(sshd:session): session closed for user core Apr 23 00:13:23.714436 systemd[1]: sshd@21-10.0.0.19:22-10.0.0.1:41018.service: Deactivated successfully. Apr 23 00:13:23.758828 systemd[1]: session-23.scope: Deactivated successfully. Apr 23 00:13:23.759598 systemd[1]: session-23.scope: Consumed 2.926s CPU time, 16.1M memory peak. Apr 23 00:13:23.762878 systemd-logind[1614]: Session 23 logged out. Waiting for processes to exit. Apr 23 00:13:23.816972 systemd-logind[1614]: Removed session 23. Apr 23 00:13:23.952645 kubelet[3117]: E0423 00:13:23.951978 3117 reflector.go:204] "Failed to watch" err="configmaps \"kube-flannel-cfg\" is forbidden: User \"system:node:localhost\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-flannel\": no relationship found between node 'localhost' and this object - error from a previous attempt: net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 23 00:13:23.984444 kubelet[3117]: E0423 00:13:23.984195 3117 reflector.go:204] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-flannel\": no relationship found between node 'localhost' and this object - error from a previous attempt: net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 23 00:13:25.553429 kubelet[3117]: E0423 00:13:25.553284 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Apr 23 00:13:27.553874 kubelet[3117]: E0423 00:13:27.553697 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:13:28.573841 kubelet[3117]: E0423 00:13:28.573146 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:13:28.633904 systemd[1]: Started sshd@22-10.0.0.19:22-10.0.0.1:40236.service - OpenSSH per-connection server daemon (10.0.0.1:40236). Apr 23 00:13:29.202171 sshd[6127]: Accepted publickey for core from 10.0.0.1 port 40236 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:13:29.210536 sshd-session[6127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:13:29.665508 systemd-logind[1614]: New session 24 of user core. Apr 23 00:13:29.707852 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 23 00:13:31.973049 sshd[6138]: Connection closed by 10.0.0.1 port 40236 Apr 23 00:13:32.015545 sshd-session[6127]: pam_unix(sshd:session): session closed for user core Apr 23 00:13:32.149740 systemd[1]: sshd@22-10.0.0.19:22-10.0.0.1:40236.service: Deactivated successfully. Apr 23 00:13:32.361284 systemd[1]: session-24.scope: Deactivated successfully. Apr 23 00:13:32.380268 systemd[1]: session-24.scope: Consumed 1.778s CPU time, 17.1M memory peak. Apr 23 00:13:32.509898 systemd-logind[1614]: Session 24 logged out. Waiting for processes to exit. Apr 23 00:13:32.558921 systemd-logind[1614]: Removed session 24. Apr 23 00:13:36.844901 systemd[1]: cri-containerd-2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d.scope: Deactivated successfully. Apr 23 00:13:36.847234 systemd[1]: cri-containerd-2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d.scope: Consumed 12.814s CPU time, 20M memory peak. 
Apr 23 00:13:36.937403 containerd[1644]: time="2026-04-23T00:13:36.936316733Z" level=info msg="received container exit event container_id:\"2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d\" id:\"2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d\" pid:5553 exit_status:1 exited_at:{seconds:1776903216 nanos:878342162}" Apr 23 00:13:37.318654 systemd[1]: Started sshd@23-10.0.0.19:22-10.0.0.1:59232.service - OpenSSH per-connection server daemon (10.0.0.1:59232). Apr 23 00:13:39.184759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d-rootfs.mount: Deactivated successfully. Apr 23 00:13:39.293201 containerd[1644]: time="2026-04-23T00:13:39.291406357Z" level=error msg="collecting metrics for 2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d" error="ttrpc: closed" Apr 23 00:13:39.559373 sshd[6185]: Accepted publickey for core from 10.0.0.1 port 59232 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:13:39.598121 sshd-session[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:13:39.796958 systemd-logind[1614]: New session 25 of user core. Apr 23 00:13:39.907381 systemd[1]: Started session-25.scope - Session 25 of User core. 
Apr 23 00:13:40.713952 kubelet[3117]: I0423 00:13:40.713300 3117 scope.go:122] "RemoveContainer" containerID="7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab" Apr 23 00:13:40.870014 kubelet[3117]: I0423 00:13:40.862551 3117 scope.go:122] "RemoveContainer" containerID="2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d" Apr 23 00:13:40.892228 kubelet[3117]: E0423 00:13:40.891807 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:13:40.909462 kubelet[3117]: E0423 00:13:40.904305 3117 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" Apr 23 00:13:41.044554 containerd[1644]: time="2026-04-23T00:13:41.039897842Z" level=info msg="RemoveContainer for \"7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab\"" Apr 23 00:13:41.149427 containerd[1644]: time="2026-04-23T00:13:41.148897493Z" level=info msg="RemoveContainer for \"7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab\" returns successfully" Apr 23 00:13:41.875071 sshd[6213]: Connection closed by 10.0.0.1 port 59232 Apr 23 00:13:41.889416 sshd-session[6185]: pam_unix(sshd:session): session closed for user core Apr 23 00:13:41.903484 systemd[1]: sshd@23-10.0.0.19:22-10.0.0.1:59232.service: Deactivated successfully. Apr 23 00:13:41.933479 systemd[1]: session-25.scope: Deactivated successfully. Apr 23 00:13:41.939622 systemd[1]: session-25.scope: Consumed 1.413s CPU time, 16.4M memory peak. Apr 23 00:13:41.951229 systemd-logind[1614]: Session 25 logged out. Waiting for processes to exit. 
Apr 23 00:13:41.961749 systemd-logind[1614]: Removed session 25. Apr 23 00:13:45.891691 kubelet[3117]: I0423 00:13:45.888548 3117 scope.go:122] "RemoveContainer" containerID="2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d" Apr 23 00:13:45.980834 kubelet[3117]: E0423 00:13:45.967064 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:13:45.985224 kubelet[3117]: E0423 00:13:45.983983 3117 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" Apr 23 00:13:46.549925 kubelet[3117]: E0423 00:13:46.549644 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:13:47.132198 systemd[1]: Started sshd@24-10.0.0.19:22-10.0.0.1:36074.service - OpenSSH per-connection server daemon (10.0.0.1:36074). Apr 23 00:13:47.133278 kubelet[3117]: E0423 00:13:47.133025 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:13:47.428658 sshd[6248]: Accepted publickey for core from 10.0.0.1 port 36074 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:13:47.437624 sshd-session[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:13:47.483363 systemd-logind[1614]: New session 26 of user core. Apr 23 00:13:47.507193 systemd[1]: Started session-26.scope - Session 26 of User core. 
Apr 23 00:13:48.072549 sshd[6254]: Connection closed by 10.0.0.1 port 36074
Apr 23 00:13:48.078916 sshd-session[6248]: pam_unix(sshd:session): session closed for user core
Apr 23 00:13:48.089724 systemd[1]: sshd@24-10.0.0.19:22-10.0.0.1:36074.service: Deactivated successfully.
Apr 23 00:13:48.092221 systemd[1]: session-26.scope: Deactivated successfully.
Apr 23 00:13:48.093414 systemd-logind[1614]: Session 26 logged out. Waiting for processes to exit.
Apr 23 00:13:48.094442 systemd-logind[1614]: Removed session 26.
Apr 23 00:13:50.637664 kubelet[3117]: E0423 00:13:50.636740 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:13:53.257953 systemd[1]: Started sshd@25-10.0.0.19:22-10.0.0.1:36082.service - OpenSSH per-connection server daemon (10.0.0.1:36082).
Apr 23 00:13:53.635039 containerd[1644]: time="2026-04-23T00:13:53.630530256Z" level=info msg="container event discarded" container=c9a9c0dd0f7f4ab0f5cbb835c3da1b987298bbdf2eb81b25d9b007a8bc6b8123 type=CONTAINER_CREATED_EVENT
Apr 23 00:13:53.767930 sshd[6298]: Accepted publickey for core from 10.0.0.1 port 36082 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:13:53.786791 sshd-session[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:13:53.934712 systemd-logind[1614]: New session 27 of user core.
Apr 23 00:13:53.955519 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 23 00:13:54.673849 sshd[6309]: Connection closed by 10.0.0.1 port 36082
Apr 23 00:13:54.674363 sshd-session[6298]: pam_unix(sshd:session): session closed for user core
Apr 23 00:13:54.776025 systemd[1]: sshd@25-10.0.0.19:22-10.0.0.1:36082.service: Deactivated successfully.
Apr 23 00:13:54.802802 systemd[1]: session-27.scope: Deactivated successfully.
Apr 23 00:13:54.810633 systemd-logind[1614]: Session 27 logged out. Waiting for processes to exit.
Apr 23 00:13:54.832522 systemd-logind[1614]: Removed session 27.
Apr 23 00:13:59.984991 systemd[1]: Started sshd@26-10.0.0.19:22-10.0.0.1:56848.service - OpenSSH per-connection server daemon (10.0.0.1:56848).
Apr 23 00:14:00.378692 sshd[6349]: Accepted publickey for core from 10.0.0.1 port 56848 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:14:00.381450 sshd-session[6349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:14:00.537807 systemd-logind[1614]: New session 28 of user core.
Apr 23 00:14:00.561427 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 23 00:14:01.906821 containerd[1644]: time="2026-04-23T00:14:01.906500010Z" level=info msg="container event discarded" container=c9a9c0dd0f7f4ab0f5cbb835c3da1b987298bbdf2eb81b25d9b007a8bc6b8123 type=CONTAINER_STARTED_EVENT
Apr 23 00:14:03.264224 sshd[6353]: Connection closed by 10.0.0.1 port 56848
Apr 23 00:14:03.267414 sshd-session[6349]: pam_unix(sshd:session): session closed for user core
Apr 23 00:14:03.332369 systemd[1]: sshd@26-10.0.0.19:22-10.0.0.1:56848.service: Deactivated successfully.
Apr 23 00:14:03.339018 systemd[1]: session-28.scope: Deactivated successfully.
Apr 23 00:14:03.339458 systemd[1]: session-28.scope: Consumed 2.314s CPU time, 15.9M memory peak.
Apr 23 00:14:03.343373 systemd-logind[1614]: Session 28 logged out. Waiting for processes to exit.
Apr 23 00:14:03.345427 systemd-logind[1614]: Removed session 28.
Apr 23 00:14:08.570268 systemd[1]: Started sshd@27-10.0.0.19:22-10.0.0.1:43370.service - OpenSSH per-connection server daemon (10.0.0.1:43370).
Apr 23 00:14:10.254296 sshd[6391]: Accepted publickey for core from 10.0.0.1 port 43370 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:14:10.289081 sshd-session[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:14:10.809538 systemd-logind[1614]: New session 29 of user core.
Apr 23 00:14:10.863401 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 23 00:14:12.569072 sshd[6417]: Connection closed by 10.0.0.1 port 43370
Apr 23 00:14:12.571997 sshd-session[6391]: pam_unix(sshd:session): session closed for user core
Apr 23 00:14:12.606546 systemd[1]: sshd@27-10.0.0.19:22-10.0.0.1:43370.service: Deactivated successfully.
Apr 23 00:14:12.631286 systemd[1]: session-29.scope: Deactivated successfully.
Apr 23 00:14:12.636546 systemd[1]: session-29.scope: Consumed 1.434s CPU time, 16.1M memory peak.
Apr 23 00:14:12.646453 systemd-logind[1614]: Session 29 logged out. Waiting for processes to exit.
Apr 23 00:14:12.648098 systemd-logind[1614]: Removed session 29.
Apr 23 00:14:17.759969 systemd[1]: Started sshd@28-10.0.0.19:22-10.0.0.1:57002.service - OpenSSH per-connection server daemon (10.0.0.1:57002).
Apr 23 00:14:18.213053 sshd[6450]: Accepted publickey for core from 10.0.0.1 port 57002 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:14:18.228212 sshd-session[6450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:14:18.254825 systemd-logind[1614]: New session 30 of user core.
Apr 23 00:14:18.273229 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 23 00:14:18.602536 kubelet[3117]: E0423 00:14:18.590436 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:14:22.026703 sshd[6454]: Connection closed by 10.0.0.1 port 57002
Apr 23 00:14:22.027952 sshd-session[6450]: pam_unix(sshd:session): session closed for user core
Apr 23 00:14:22.083304 systemd[1]: sshd@28-10.0.0.19:22-10.0.0.1:57002.service: Deactivated successfully.
Apr 23 00:14:22.188230 systemd[1]: session-30.scope: Deactivated successfully.
Apr 23 00:14:22.188734 systemd[1]: session-30.scope: Consumed 2.673s CPU time, 16.2M memory peak.
Apr 23 00:14:22.194999 systemd-logind[1614]: Session 30 logged out. Waiting for processes to exit.
Apr 23 00:14:22.240734 systemd-logind[1614]: Removed session 30.
Apr 23 00:14:27.259261 systemd[1]: Started sshd@29-10.0.0.19:22-10.0.0.1:49732.service - OpenSSH per-connection server daemon (10.0.0.1:49732).
Apr 23 00:14:28.013528 sshd[6508]: Accepted publickey for core from 10.0.0.1 port 49732 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:14:28.096063 sshd-session[6508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:14:28.383386 systemd-logind[1614]: New session 31 of user core.
Apr 23 00:14:28.470778 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 23 00:14:30.271983 sshd[6512]: Connection closed by 10.0.0.1 port 49732
Apr 23 00:14:30.284517 sshd-session[6508]: pam_unix(sshd:session): session closed for user core
Apr 23 00:14:30.339897 systemd[1]: sshd@29-10.0.0.19:22-10.0.0.1:49732.service: Deactivated successfully.
Apr 23 00:14:30.391232 systemd[1]: session-31.scope: Deactivated successfully.
Apr 23 00:14:30.391910 systemd[1]: session-31.scope: Consumed 1.367s CPU time, 15.4M memory peak.
Apr 23 00:14:30.402378 systemd-logind[1614]: Session 31 logged out. Waiting for processes to exit.
Apr 23 00:14:30.517414 systemd-logind[1614]: Removed session 31.
Apr 23 00:14:30.564074 kubelet[3117]: E0423 00:14:30.563510 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:14:35.388798 systemd[1]: Started sshd@30-10.0.0.19:22-10.0.0.1:46464.service - OpenSSH per-connection server daemon (10.0.0.1:46464).
Apr 23 00:14:36.202306 sshd[6549]: Accepted publickey for core from 10.0.0.1 port 46464 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:14:36.277282 sshd-session[6549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:14:36.356644 systemd-logind[1614]: New session 32 of user core.
Apr 23 00:14:36.363319 systemd[1]: Started session-32.scope - Session 32 of User core.
Apr 23 00:14:37.344066 sshd[6559]: Connection closed by 10.0.0.1 port 46464
Apr 23 00:14:37.347504 sshd-session[6549]: pam_unix(sshd:session): session closed for user core
Apr 23 00:14:37.389826 systemd[1]: sshd@30-10.0.0.19:22-10.0.0.1:46464.service: Deactivated successfully.
Apr 23 00:14:37.492246 systemd[1]: session-32.scope: Deactivated successfully.
Apr 23 00:14:37.517036 systemd-logind[1614]: Session 32 logged out. Waiting for processes to exit.
Apr 23 00:14:37.536042 systemd-logind[1614]: Removed session 32.
Apr 23 00:14:37.551907 kubelet[3117]: E0423 00:14:37.551706 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:14:42.627722 systemd[1]: Started sshd@31-10.0.0.19:22-10.0.0.1:46476.service - OpenSSH per-connection server daemon (10.0.0.1:46476).
Apr 23 00:14:43.050987 sshd[6606]: Accepted publickey for core from 10.0.0.1 port 46476 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:14:43.053929 sshd-session[6606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:14:43.072419 systemd-logind[1614]: New session 33 of user core.
Apr 23 00:14:43.078689 systemd[1]: Started session-33.scope - Session 33 of User core.
Apr 23 00:14:43.414427 sshd[6610]: Connection closed by 10.0.0.1 port 46476
Apr 23 00:14:43.419085 sshd-session[6606]: pam_unix(sshd:session): session closed for user core
Apr 23 00:14:43.428369 systemd[1]: sshd@31-10.0.0.19:22-10.0.0.1:46476.service: Deactivated successfully.
Apr 23 00:14:43.431444 systemd[1]: session-33.scope: Deactivated successfully.
Apr 23 00:14:43.432897 systemd-logind[1614]: Session 33 logged out. Waiting for processes to exit.
Apr 23 00:14:43.434336 systemd-logind[1614]: Removed session 33.
Apr 23 00:14:43.586804 kubelet[3117]: E0423 00:14:43.586155 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:14:48.732135 systemd[1]: Started sshd@32-10.0.0.19:22-10.0.0.1:34396.service - OpenSSH per-connection server daemon (10.0.0.1:34396).
Apr 23 00:14:49.761375 sshd[6644]: Accepted publickey for core from 10.0.0.1 port 34396 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:14:49.870271 sshd-session[6644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:14:49.960914 systemd-logind[1614]: New session 34 of user core.
Apr 23 00:14:50.090482 systemd[1]: Started session-34.scope - Session 34 of User core.
Apr 23 00:14:51.229141 containerd[1644]: time="2026-04-23T00:14:51.228867357Z" level=info msg="container event discarded" container=7dd25a31dd4504e608a67f063e3c3da46ff53c9bdd97567ce9da92bb1b3f60ab type=CONTAINER_STOPPED_EVENT
Apr 23 00:14:51.702418 sshd[6648]: Connection closed by 10.0.0.1 port 34396
Apr 23 00:14:51.721536 sshd-session[6644]: pam_unix(sshd:session): session closed for user core
Apr 23 00:14:51.900784 systemd[1]: sshd@32-10.0.0.19:22-10.0.0.1:34396.service: Deactivated successfully.
Apr 23 00:14:51.952951 systemd[1]: session-34.scope: Deactivated successfully.
Apr 23 00:14:51.960430 systemd[1]: session-34.scope: Consumed 1.432s CPU time, 16.4M memory peak.
Apr 23 00:14:51.962913 systemd-logind[1614]: Session 34 logged out. Waiting for processes to exit.
Apr 23 00:14:51.990303 systemd-logind[1614]: Removed session 34.
Apr 23 00:14:52.664723 kubelet[3117]: I0423 00:14:52.663885 3117 scope.go:122] "RemoveContainer" containerID="2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d"
Apr 23 00:14:52.676498 kubelet[3117]: E0423 00:14:52.676138 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:14:52.690410 kubelet[3117]: E0423 00:14:52.689978 3117 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 23 00:14:54.970850 containerd[1644]: time="2026-04-23T00:14:54.969246923Z" level=info msg="container event discarded" container=593d9fe1acb2fe55e2b2afa046802883052906f9130b00b43f02f748ceb4ab9b type=CONTAINER_DELETED_EVENT
Apr 23 00:14:56.958402 systemd[1]: Started sshd@33-10.0.0.19:22-10.0.0.1:44126.service - OpenSSH per-connection server daemon (10.0.0.1:44126).
Apr 23 00:14:58.360391 sshd[6684]: Accepted publickey for core from 10.0.0.1 port 44126 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:14:58.464435 sshd-session[6684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:14:58.572111 systemd-logind[1614]: New session 35 of user core.
Apr 23 00:14:58.695066 systemd[1]: Started session-35.scope - Session 35 of User core.
Apr 23 00:15:01.079295 sshd[6706]: Connection closed by 10.0.0.1 port 44126
Apr 23 00:15:01.083469 sshd-session[6684]: pam_unix(sshd:session): session closed for user core
Apr 23 00:15:01.162424 systemd[1]: sshd@33-10.0.0.19:22-10.0.0.1:44126.service: Deactivated successfully.
Apr 23 00:15:01.256425 systemd[1]: session-35.scope: Deactivated successfully.
Apr 23 00:15:01.262334 systemd[1]: session-35.scope: Consumed 1.898s CPU time, 18M memory peak.
Apr 23 00:15:01.279161 systemd-logind[1614]: Session 35 logged out. Waiting for processes to exit.
Apr 23 00:15:01.291201 systemd-logind[1614]: Removed session 35.
Apr 23 00:15:06.339880 systemd[1]: Started sshd@34-10.0.0.19:22-10.0.0.1:42948.service - OpenSSH per-connection server daemon (10.0.0.1:42948).
Apr 23 00:15:07.198916 sshd[6742]: Accepted publickey for core from 10.0.0.1 port 42948 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:15:07.234730 sshd-session[6742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:15:07.484515 systemd-logind[1614]: New session 36 of user core.
Apr 23 00:15:07.593907 systemd[1]: Started session-36.scope - Session 36 of User core.
Apr 23 00:15:09.893548 sshd[6752]: Connection closed by 10.0.0.1 port 42948
Apr 23 00:15:09.898839 sshd-session[6742]: pam_unix(sshd:session): session closed for user core
Apr 23 00:15:09.911429 systemd[1]: sshd@34-10.0.0.19:22-10.0.0.1:42948.service: Deactivated successfully.
Apr 23 00:15:09.922520 systemd[1]: session-36.scope: Deactivated successfully.
Apr 23 00:15:09.923052 systemd[1]: session-36.scope: Consumed 1.687s CPU time, 14.5M memory peak.
Apr 23 00:15:09.927126 systemd-logind[1614]: Session 36 logged out. Waiting for processes to exit.
Apr 23 00:15:09.933439 systemd-logind[1614]: Removed session 36.
Apr 23 00:15:14.558108 kubelet[3117]: E0423 00:15:14.556778 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:15:15.058697 systemd[1]: Started sshd@35-10.0.0.19:22-10.0.0.1:42958.service - OpenSSH per-connection server daemon (10.0.0.1:42958).
Apr 23 00:15:15.642341 sshd[6800]: Accepted publickey for core from 10.0.0.1 port 42958 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:15:15.668465 sshd-session[6800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:15:15.912115 systemd-logind[1614]: New session 37 of user core.
Apr 23 00:15:16.014552 systemd[1]: Started session-37.scope - Session 37 of User core.
Apr 23 00:15:16.554863 kubelet[3117]: E0423 00:15:16.549509 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:15:17.298733 sshd[6804]: Connection closed by 10.0.0.1 port 42958
Apr 23 00:15:17.300228 sshd-session[6800]: pam_unix(sshd:session): session closed for user core
Apr 23 00:15:17.324166 systemd[1]: sshd@35-10.0.0.19:22-10.0.0.1:42958.service: Deactivated successfully.
Apr 23 00:15:17.338420 systemd[1]: session-37.scope: Deactivated successfully.
Apr 23 00:15:17.346229 systemd-logind[1614]: Session 37 logged out. Waiting for processes to exit.
Apr 23 00:15:17.351208 systemd-logind[1614]: Removed session 37.
Apr 23 00:15:22.359775 systemd[1]: Started sshd@36-10.0.0.19:22-10.0.0.1:36686.service - OpenSSH per-connection server daemon (10.0.0.1:36686).
Apr 23 00:15:22.841689 sshd[6837]: Accepted publickey for core from 10.0.0.1 port 36686 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:15:22.847095 sshd-session[6837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:15:23.011993 systemd-logind[1614]: New session 38 of user core.
Apr 23 00:15:23.040344 systemd[1]: Started session-38.scope - Session 38 of User core.
Apr 23 00:15:23.824133 sshd[6847]: Connection closed by 10.0.0.1 port 36686
Apr 23 00:15:23.860546 sshd-session[6837]: pam_unix(sshd:session): session closed for user core
Apr 23 00:15:23.895474 systemd[1]: sshd@36-10.0.0.19:22-10.0.0.1:36686.service: Deactivated successfully.
Apr 23 00:15:23.946507 systemd[1]: session-38.scope: Deactivated successfully.
Apr 23 00:15:23.963927 systemd-logind[1614]: Session 38 logged out. Waiting for processes to exit.
Apr 23 00:15:23.990887 systemd-logind[1614]: Removed session 38.
Apr 23 00:15:29.156092 systemd[1]: Started sshd@37-10.0.0.19:22-10.0.0.1:58912.service - OpenSSH per-connection server daemon (10.0.0.1:58912).
Apr 23 00:15:30.051753 sshd[6881]: Accepted publickey for core from 10.0.0.1 port 58912 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:15:30.087368 sshd-session[6881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:15:30.263201 systemd-logind[1614]: New session 39 of user core.
Apr 23 00:15:30.356009 systemd[1]: Started session-39.scope - Session 39 of User core.
Apr 23 00:15:32.172921 sshd[6887]: Connection closed by 10.0.0.1 port 58912
Apr 23 00:15:32.180012 sshd-session[6881]: pam_unix(sshd:session): session closed for user core
Apr 23 00:15:32.234489 systemd[1]: sshd@37-10.0.0.19:22-10.0.0.1:58912.service: Deactivated successfully.
Apr 23 00:15:32.422453 systemd[1]: session-39.scope: Deactivated successfully.
Apr 23 00:15:32.429163 systemd[1]: session-39.scope: Consumed 1.267s CPU time, 16M memory peak.
Apr 23 00:15:32.449217 systemd-logind[1614]: Session 39 logged out. Waiting for processes to exit.
Apr 23 00:15:32.472516 systemd-logind[1614]: Removed session 39.
Apr 23 00:15:33.613804 kubelet[3117]: E0423 00:15:33.608539 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:15:37.377921 systemd[1]: Started sshd@38-10.0.0.19:22-10.0.0.1:53922.service - OpenSSH per-connection server daemon (10.0.0.1:53922).
Apr 23 00:15:37.554503 kubelet[3117]: E0423 00:15:37.553832 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:15:38.068862 sshd[6937]: Accepted publickey for core from 10.0.0.1 port 53922 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:15:38.110845 sshd-session[6937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:15:38.279915 systemd-logind[1614]: New session 40 of user core.
Apr 23 00:15:38.322458 systemd[1]: Started session-40.scope - Session 40 of User core.
Apr 23 00:15:39.684690 sshd[6944]: Connection closed by 10.0.0.1 port 53922
Apr 23 00:15:39.695533 sshd-session[6937]: pam_unix(sshd:session): session closed for user core
Apr 23 00:15:39.795110 systemd[1]: sshd@38-10.0.0.19:22-10.0.0.1:53922.service: Deactivated successfully.
Apr 23 00:15:39.823182 systemd[1]: session-40.scope: Deactivated successfully.
Apr 23 00:15:39.830846 systemd[1]: session-40.scope: Consumed 1.102s CPU time, 16.3M memory peak.
Apr 23 00:15:39.839337 systemd-logind[1614]: Session 40 logged out. Waiting for processes to exit.
Apr 23 00:15:39.843342 systemd-logind[1614]: Removed session 40.
Apr 23 00:15:44.574218 kubelet[3117]: E0423 00:15:44.573554 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:15:44.928736 systemd[1]: Started sshd@39-10.0.0.19:22-10.0.0.1:53932.service - OpenSSH per-connection server daemon (10.0.0.1:53932).
Apr 23 00:15:45.547727 sshd[6980]: Accepted publickey for core from 10.0.0.1 port 53932 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:15:45.612486 sshd-session[6980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:15:45.713389 systemd-logind[1614]: New session 41 of user core.
Apr 23 00:15:45.809508 systemd[1]: Started session-41.scope - Session 41 of User core.
Apr 23 00:15:48.285368 sshd[6984]: Connection closed by 10.0.0.1 port 53932
Apr 23 00:15:48.297005 sshd-session[6980]: pam_unix(sshd:session): session closed for user core
Apr 23 00:15:48.390796 systemd[1]: sshd@39-10.0.0.19:22-10.0.0.1:53932.service: Deactivated successfully.
Apr 23 00:15:48.433016 systemd[1]: session-41.scope: Deactivated successfully.
Apr 23 00:15:48.455449 systemd[1]: session-41.scope: Consumed 1.909s CPU time, 18.8M memory peak.
Apr 23 00:15:48.459006 systemd-logind[1614]: Session 41 logged out. Waiting for processes to exit.
Apr 23 00:15:48.488671 systemd-logind[1614]: Removed session 41.
Apr 23 00:15:53.533770 systemd[1]: Started sshd@40-10.0.0.19:22-10.0.0.1:42114.service - OpenSSH per-connection server daemon (10.0.0.1:42114).
Apr 23 00:15:54.150136 sshd[7032]: Accepted publickey for core from 10.0.0.1 port 42114 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:15:54.152848 sshd-session[7032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:15:54.492063 systemd-logind[1614]: New session 42 of user core.
Apr 23 00:15:54.514452 systemd[1]: Started session-42.scope - Session 42 of User core.
Apr 23 00:15:54.589020 kubelet[3117]: I0423 00:15:54.588054 3117 scope.go:122] "RemoveContainer" containerID="2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d"
Apr 23 00:15:54.629058 kubelet[3117]: E0423 00:15:54.628138 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:15:54.645106 kubelet[3117]: E0423 00:15:54.645015 3117 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 23 00:15:56.189724 sshd[7042]: Connection closed by 10.0.0.1 port 42114
Apr 23 00:15:56.192674 sshd-session[7032]: pam_unix(sshd:session): session closed for user core
Apr 23 00:15:56.256191 systemd[1]: sshd@40-10.0.0.19:22-10.0.0.1:42114.service: Deactivated successfully.
Apr 23 00:15:56.302007 systemd[1]: session-42.scope: Deactivated successfully.
Apr 23 00:15:56.302652 systemd[1]: session-42.scope: Consumed 1.246s CPU time, 16.1M memory peak.
Apr 23 00:15:56.349773 systemd-logind[1614]: Session 42 logged out. Waiting for processes to exit.
Apr 23 00:15:56.352190 systemd-logind[1614]: Removed session 42.
Apr 23 00:16:01.448520 systemd[1]: Started sshd@41-10.0.0.19:22-10.0.0.1:39960.service - OpenSSH per-connection server daemon (10.0.0.1:39960).
Apr 23 00:16:02.750267 sshd[7076]: Accepted publickey for core from 10.0.0.1 port 39960 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:16:02.857874 sshd-session[7076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:16:03.146452 systemd-logind[1614]: New session 43 of user core.
Apr 23 00:16:03.167293 systemd[1]: Started session-43.scope - Session 43 of User core.
Apr 23 00:16:06.154443 sshd[7090]: Connection closed by 10.0.0.1 port 39960
Apr 23 00:16:06.157011 sshd-session[7076]: pam_unix(sshd:session): session closed for user core
Apr 23 00:16:06.185537 systemd[1]: sshd@41-10.0.0.19:22-10.0.0.1:39960.service: Deactivated successfully.
Apr 23 00:16:06.226299 systemd[1]: session-43.scope: Deactivated successfully.
Apr 23 00:16:06.227541 systemd[1]: session-43.scope: Consumed 2.137s CPU time, 16M memory peak.
Apr 23 00:16:06.258285 systemd-logind[1614]: Session 43 logged out. Waiting for processes to exit.
Apr 23 00:16:06.273158 systemd-logind[1614]: Removed session 43.
Apr 23 00:16:06.595389 kubelet[3117]: E0423 00:16:06.593979 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:16:11.217506 systemd[1]: Started sshd@42-10.0.0.19:22-10.0.0.1:44876.service - OpenSSH per-connection server daemon (10.0.0.1:44876).
Apr 23 00:16:11.742864 sshd[7136]: Accepted publickey for core from 10.0.0.1 port 44876 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:16:11.864038 sshd-session[7136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:16:12.143235 systemd-logind[1614]: New session 44 of user core.
Apr 23 00:16:12.183295 systemd[1]: Started session-44.scope - Session 44 of User core.
Apr 23 00:16:14.296756 sshd[7140]: Connection closed by 10.0.0.1 port 44876
Apr 23 00:16:14.299906 sshd-session[7136]: pam_unix(sshd:session): session closed for user core
Apr 23 00:16:14.383228 systemd[1]: sshd@42-10.0.0.19:22-10.0.0.1:44876.service: Deactivated successfully.
Apr 23 00:16:14.396678 systemd[1]: session-44.scope: Deactivated successfully.
Apr 23 00:16:14.397495 systemd[1]: session-44.scope: Consumed 1.718s CPU time, 15M memory peak.
Apr 23 00:16:14.407666 systemd-logind[1614]: Session 44 logged out. Waiting for processes to exit.
Apr 23 00:16:14.464157 systemd-logind[1614]: Removed session 44.
Apr 23 00:16:18.341053 containerd[1644]: time="2026-04-23T00:16:18.339481095Z" level=info msg="container event discarded" container=2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d type=CONTAINER_CREATED_EVENT
Apr 23 00:16:19.414235 systemd[1]: Started sshd@43-10.0.0.19:22-10.0.0.1:49496.service - OpenSSH per-connection server daemon (10.0.0.1:49496).
Apr 23 00:16:20.180419 sshd[7185]: Accepted publickey for core from 10.0.0.1 port 49496 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:16:20.204012 sshd-session[7185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:16:20.365410 systemd-logind[1614]: New session 45 of user core.
Apr 23 00:16:20.399247 systemd[1]: Started session-45.scope - Session 45 of User core.
Apr 23 00:16:20.592498 kubelet[3117]: E0423 00:16:20.590048 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:16:22.388476 sshd[7197]: Connection closed by 10.0.0.1 port 49496
Apr 23 00:16:22.393133 sshd-session[7185]: pam_unix(sshd:session): session closed for user core
Apr 23 00:16:22.474670 systemd-logind[1614]: Session 45 logged out. Waiting for processes to exit.
Apr 23 00:16:22.485551 systemd[1]: sshd@43-10.0.0.19:22-10.0.0.1:49496.service: Deactivated successfully.
Apr 23 00:16:22.549220 systemd[1]: session-45.scope: Deactivated successfully.
Apr 23 00:16:22.554324 systemd[1]: session-45.scope: Consumed 1.486s CPU time, 16.2M memory peak.
Apr 23 00:16:22.565734 kubelet[3117]: I0423 00:16:22.562754 3117 scope.go:122] "RemoveContainer" containerID="2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d"
Apr 23 00:16:22.583222 systemd-logind[1614]: Removed session 45.
Apr 23 00:16:22.592037 kubelet[3117]: E0423 00:16:22.591789 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:16:22.664415 containerd[1644]: time="2026-04-23T00:16:22.656043275Z" level=info msg="CreateContainer within sandbox \"1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:8,}"
Apr 23 00:16:22.862220 containerd[1644]: time="2026-04-23T00:16:22.855318506Z" level=info msg="Container fdd00c91d26a5e14e6422171a65d2c512665796726ac2e0514d3c9ac530a9047: CDI devices from CRI Config.CDIDevices: []"
Apr 23 00:16:23.011844 containerd[1644]: time="2026-04-23T00:16:23.008312629Z" level=info msg="CreateContainer within sandbox \"1fc3e0f9ad2e10d545a394e21fc03e0f63fc685bf45a6bf5fe9c6e0662ae8a62\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:8,} returns container id \"fdd00c91d26a5e14e6422171a65d2c512665796726ac2e0514d3c9ac530a9047\""
Apr 23 00:16:23.025146 containerd[1644]: time="2026-04-23T00:16:23.024310284Z" level=info msg="StartContainer for \"fdd00c91d26a5e14e6422171a65d2c512665796726ac2e0514d3c9ac530a9047\""
Apr 23 00:16:23.040029 containerd[1644]: time="2026-04-23T00:16:23.039212262Z" level=info msg="connecting to shim fdd00c91d26a5e14e6422171a65d2c512665796726ac2e0514d3c9ac530a9047" address="unix:///run/containerd/s/6ba32b72164cd91bf659ca1b461d59fa9373c7c833adb85e108c1f63f7cb4764" protocol=ttrpc version=3
Apr 23 00:16:23.278800 systemd[1]: Started cri-containerd-fdd00c91d26a5e14e6422171a65d2c512665796726ac2e0514d3c9ac530a9047.scope - libcontainer container fdd00c91d26a5e14e6422171a65d2c512665796726ac2e0514d3c9ac530a9047.
Apr 23 00:16:23.803212 containerd[1644]: time="2026-04-23T00:16:23.802136795Z" level=info msg="StartContainer for \"fdd00c91d26a5e14e6422171a65d2c512665796726ac2e0514d3c9ac530a9047\" returns successfully"
Apr 23 00:16:24.186301 kubelet[3117]: E0423 00:16:24.185093 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:16:25.907026 kubelet[3117]: E0423 00:16:25.905335 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:16:27.536923 systemd[1]: Started sshd@44-10.0.0.19:22-10.0.0.1:40046.service - OpenSSH per-connection server daemon (10.0.0.1:40046).
Apr 23 00:16:27.565281 kubelet[3117]: E0423 00:16:27.559930 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:16:28.148092 sshd[7264]: Accepted publickey for core from 10.0.0.1 port 40046 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:16:28.164074 sshd-session[7264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:16:28.390886 systemd-logind[1614]: New session 46 of user core.
Apr 23 00:16:28.497249 systemd[1]: Started session-46.scope - Session 46 of User core.
Apr 23 00:16:30.463552 containerd[1644]: time="2026-04-23T00:16:30.462767760Z" level=info msg="container event discarded" container=2cf2ab21ff7ddb1d11af4bcdfe8d27f9e1793cccac4b37ad763759268d35146d type=CONTAINER_STARTED_EVENT
Apr 23 00:16:30.690458 sshd[7268]: Connection closed by 10.0.0.1 port 40046
Apr 23 00:16:30.704839 sshd-session[7264]: pam_unix(sshd:session): session closed for user core
Apr 23 00:16:30.911257 systemd[1]: sshd@44-10.0.0.19:22-10.0.0.1:40046.service: Deactivated successfully.
Apr 23 00:16:30.973264 systemd[1]: session-46.scope: Deactivated successfully.
Apr 23 00:16:31.051274 systemd[1]: session-46.scope: Consumed 1.697s CPU time, 16.4M memory peak.
Apr 23 00:16:31.087800 systemd-logind[1614]: Session 46 logged out. Waiting for processes to exit.
Apr 23 00:16:31.094997 systemd-logind[1614]: Removed session 46.
Apr 23 00:16:35.885021 systemd[1]: Started sshd@45-10.0.0.19:22-10.0.0.1:33492.service - OpenSSH per-connection server daemon (10.0.0.1:33492).
Apr 23 00:16:36.048276 kubelet[3117]: E0423 00:16:36.048143 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:16:37.476970 sshd[7315]: Accepted publickey for core from 10.0.0.1 port 33492 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:16:37.491010 sshd-session[7315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:16:37.798046 systemd-logind[1614]: New session 47 of user core.
Apr 23 00:16:37.849514 systemd[1]: Started session-47.scope - Session 47 of User core.
Apr 23 00:16:40.970132 sshd[7329]: Connection closed by 10.0.0.1 port 33492
Apr 23 00:16:40.982446 sshd-session[7315]: pam_unix(sshd:session): session closed for user core
Apr 23 00:16:41.137016 systemd[1]: sshd@45-10.0.0.19:22-10.0.0.1:33492.service: Deactivated successfully.
Apr 23 00:16:41.157996 systemd[1]: session-47.scope: Deactivated successfully.
Apr 23 00:16:41.158325 systemd[1]: session-47.scope: Consumed 2.540s CPU time, 16.3M memory peak.
Apr 23 00:16:41.172422 systemd-logind[1614]: Session 47 logged out. Waiting for processes to exit.
Apr 23 00:16:41.314978 systemd[1]: Starting logrotate.service - Rotate log files...
Apr 23 00:16:41.328164 systemd-logind[1614]: Removed session 47.
Apr 23 00:16:42.316734 systemd[1]: logrotate.service: Deactivated successfully.
Apr 23 00:16:42.317499 systemd[1]: Finished logrotate.service - Rotate log files.
Apr 23 00:16:46.075867 systemd[1]: Started sshd@46-10.0.0.19:22-10.0.0.1:45866.service - OpenSSH per-connection server daemon (10.0.0.1:45866).
Apr 23 00:16:46.683870 sshd[7374]: Accepted publickey for core from 10.0.0.1 port 45866 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:16:46.763022 sshd-session[7374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:16:46.912964 systemd-logind[1614]: New session 48 of user core.
Apr 23 00:16:46.948622 systemd[1]: Started session-48.scope - Session 48 of User core.
Apr 23 00:16:48.886074 sshd[7379]: Connection closed by 10.0.0.1 port 45866
Apr 23 00:16:48.895157 sshd-session[7374]: pam_unix(sshd:session): session closed for user core
Apr 23 00:16:48.961042 systemd[1]: sshd@46-10.0.0.19:22-10.0.0.1:45866.service: Deactivated successfully.
Apr 23 00:16:49.015233 systemd[1]: session-48.scope: Deactivated successfully.
Apr 23 00:16:49.017667 systemd[1]: session-48.scope: Consumed 1.483s CPU time, 14.9M memory peak.
Apr 23 00:16:49.110941 systemd-logind[1614]: Session 48 logged out. Waiting for processes to exit.
Apr 23 00:16:49.148002 systemd-logind[1614]: Removed session 48.
Apr 23 00:16:50.555357 kubelet[3117]: E0423 00:16:50.553950 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:16:54.068198 systemd[1]: Started sshd@47-10.0.0.19:22-10.0.0.1:45870.service - OpenSSH per-connection server daemon (10.0.0.1:45870).
Apr 23 00:16:54.667500 sshd[7425]: Accepted publickey for core from 10.0.0.1 port 45870 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:16:54.674871 kubelet[3117]: E0423 00:16:54.670001 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:16:54.685380 sshd-session[7425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:16:55.023193 systemd-logind[1614]: New session 49 of user core.
Apr 23 00:16:55.040767 systemd[1]: Started session-49.scope - Session 49 of User core.
Apr 23 00:16:59.669033 sshd[7429]: Connection closed by 10.0.0.1 port 45870
Apr 23 00:16:59.664133 sshd-session[7425]: pam_unix(sshd:session): session closed for user core
Apr 23 00:16:59.749140 containerd[1644]: time="2026-04-23T00:16:59.716016090Z" level=info msg="container event discarded" container=18d6934792aa16c489d730fea71b4d0b880143b6342c8308ab99a47b296803cb type=CONTAINER_STOPPED_EVENT
Apr 23 00:16:59.753078 systemd-logind[1614]: Session 49 logged out. Waiting for processes to exit.
Apr 23 00:16:59.758146 systemd[1]: sshd@47-10.0.0.19:22-10.0.0.1:45870.service: Deactivated successfully.
Apr 23 00:16:59.764911 systemd[1]: session-49.scope: Deactivated successfully.
Apr 23 00:16:59.772931 systemd[1]: session-49.scope: Consumed 2.453s CPU time, 16.2M memory peak.
Apr 23 00:16:59.790071 systemd-logind[1614]: Removed session 49.
Apr 23 00:17:00.505093 containerd[1644]: time="2026-04-23T00:17:00.504039815Z" level=info msg="container event discarded" container=fcb3332cacb27ead2fa6fec4748e1bee8aabe80b71b18923c413322efc316ddd type=CONTAINER_CREATED_EVENT
Apr 23 00:17:01.573527 kubelet[3117]: E0423 00:17:01.573126 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:17:03.301326 containerd[1644]: time="2026-04-23T00:17:03.298247638Z" level=info msg="container event discarded" container=fcb3332cacb27ead2fa6fec4748e1bee8aabe80b71b18923c413322efc316ddd type=CONTAINER_STARTED_EVENT
Apr 23 00:17:05.238377 systemd[1]: Started sshd@48-10.0.0.19:22-10.0.0.1:46504.service - OpenSSH per-connection server daemon (10.0.0.1:46504).
Apr 23 00:17:07.183348 sshd[7493]: Accepted publickey for core from 10.0.0.1 port 46504 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:17:07.260472 sshd-session[7493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:17:07.658474 kubelet[3117]: E0423 00:17:07.658442 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:17:07.925237 systemd-logind[1614]: New session 50 of user core.
Apr 23 00:17:08.447459 systemd[1]: Started session-50.scope - Session 50 of User core.
Apr 23 00:17:09.947281 kubelet[3117]: E0423 00:17:09.944545 3117 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.396s"
Apr 23 00:17:18.374077 sshd[7503]: Connection closed by 10.0.0.1 port 46504
Apr 23 00:17:18.388820 sshd-session[7493]: pam_unix(sshd:session): session closed for user core
Apr 23 00:17:18.550075 systemd[1]: sshd@48-10.0.0.19:22-10.0.0.1:46504.service: Deactivated successfully.
Apr 23 00:17:18.726315 systemd[1]: session-50.scope: Deactivated successfully.
Apr 23 00:17:18.732903 systemd[1]: session-50.scope: Consumed 5.489s CPU time, 15.8M memory peak.
Apr 23 00:17:18.765826 systemd-logind[1614]: Session 50 logged out. Waiting for processes to exit.
Apr 23 00:17:18.796185 systemd-logind[1614]: Removed session 50.
Apr 23 00:17:23.489541 systemd[1]: Started sshd@49-10.0.0.19:22-10.0.0.1:49480.service - OpenSSH per-connection server daemon (10.0.0.1:49480).
Apr 23 00:17:24.548320 sshd[7571]: Accepted publickey for core from 10.0.0.1 port 49480 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:17:24.562908 sshd-session[7571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:17:24.756828 systemd-logind[1614]: New session 51 of user core.
Apr 23 00:17:24.809865 systemd[1]: Started session-51.scope - Session 51 of User core.
Apr 23 00:17:26.861093 sshd[7582]: Connection closed by 10.0.0.1 port 49480
Apr 23 00:17:26.875307 sshd-session[7571]: pam_unix(sshd:session): session closed for user core
Apr 23 00:17:27.111714 systemd[1]: sshd@49-10.0.0.19:22-10.0.0.1:49480.service: Deactivated successfully.
Apr 23 00:17:27.198252 systemd[1]: session-51.scope: Deactivated successfully.
Apr 23 00:17:27.202275 systemd[1]: session-51.scope: Consumed 1.322s CPU time, 16.6M memory peak.
Apr 23 00:17:27.209389 systemd-logind[1614]: Session 51 logged out. Waiting for processes to exit.
Apr 23 00:17:27.284386 systemd[1]: Started sshd@50-10.0.0.19:22-10.0.0.1:55050.service - OpenSSH per-connection server daemon (10.0.0.1:55050).
Apr 23 00:17:27.292779 systemd-logind[1614]: Removed session 51.
Apr 23 00:17:27.659469 sshd[7595]: Accepted publickey for core from 10.0.0.1 port 55050 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:17:27.672066 sshd-session[7595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:17:27.762740 systemd-logind[1614]: New session 52 of user core.
Apr 23 00:17:27.775666 systemd[1]: Started session-52.scope - Session 52 of User core.
Apr 23 00:17:29.508701 sshd[7599]: Connection closed by 10.0.0.1 port 55050
Apr 23 00:17:29.514977 sshd-session[7595]: pam_unix(sshd:session): session closed for user core
Apr 23 00:17:29.593673 systemd[1]: sshd@50-10.0.0.19:22-10.0.0.1:55050.service: Deactivated successfully.
Apr 23 00:17:29.603316 systemd[1]: session-52.scope: Deactivated successfully.
Apr 23 00:17:29.603874 systemd[1]: session-52.scope: Consumed 1.287s CPU time, 28.4M memory peak.
Apr 23 00:17:29.618988 systemd-logind[1614]: Session 52 logged out. Waiting for processes to exit.
Apr 23 00:17:29.657735 systemd[1]: Started sshd@51-10.0.0.19:22-10.0.0.1:55062.service - OpenSSH per-connection server daemon (10.0.0.1:55062).
Apr 23 00:17:29.663218 systemd-logind[1614]: Removed session 52.
Apr 23 00:17:30.051271 sshd[7632]: Accepted publickey for core from 10.0.0.1 port 55062 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:17:30.058287 sshd-session[7632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:17:30.135949 systemd-logind[1614]: New session 53 of user core.
Apr 23 00:17:30.152367 systemd[1]: Started session-53.scope - Session 53 of User core.
Apr 23 00:17:36.648514 kubelet[3117]: E0423 00:17:36.648063 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:17:40.589635 kubelet[3117]: E0423 00:17:40.588075 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:17:48.220444 sshd[7637]: Connection closed by 10.0.0.1 port 55062
Apr 23 00:17:48.219903 sshd-session[7632]: pam_unix(sshd:session): session closed for user core
Apr 23 00:17:48.375070 systemd[1]: sshd@51-10.0.0.19:22-10.0.0.1:55062.service: Deactivated successfully.
Apr 23 00:17:48.378880 systemd[1]: session-53.scope: Deactivated successfully.
Apr 23 00:17:48.384013 systemd[1]: session-53.scope: Consumed 7.226s CPU time, 41.4M memory peak.
Apr 23 00:17:48.390084 systemd-logind[1614]: Session 53 logged out. Waiting for processes to exit.
Apr 23 00:17:48.478635 systemd[1]: Started sshd@52-10.0.0.19:22-10.0.0.1:49552.service - OpenSSH per-connection server daemon (10.0.0.1:49552).
Apr 23 00:17:48.617965 systemd-logind[1614]: Removed session 53.
Apr 23 00:17:50.344838 sshd[7718]: Accepted publickey for core from 10.0.0.1 port 49552 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:17:50.409300 sshd-session[7718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:17:50.633995 systemd-logind[1614]: New session 54 of user core.
Apr 23 00:17:50.697909 systemd[1]: Started session-54.scope - Session 54 of User core.
Apr 23 00:17:53.575776 kubelet[3117]: E0423 00:17:53.575262 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:17:55.804012 sshd[7724]: Connection closed by 10.0.0.1 port 49552
Apr 23 00:17:55.806360 sshd-session[7718]: pam_unix(sshd:session): session closed for user core
Apr 23 00:17:55.881676 systemd[1]: sshd@52-10.0.0.19:22-10.0.0.1:49552.service: Deactivated successfully.
Apr 23 00:17:55.913037 systemd[1]: session-54.scope: Deactivated successfully.
Apr 23 00:17:55.913728 systemd[1]: session-54.scope: Consumed 3.543s CPU time, 25.8M memory peak.
Apr 23 00:17:55.940235 systemd-logind[1614]: Session 54 logged out. Waiting for processes to exit.
Apr 23 00:17:55.960210 systemd[1]: Started sshd@53-10.0.0.19:22-10.0.0.1:57890.service - OpenSSH per-connection server daemon (10.0.0.1:57890).
Apr 23 00:17:55.969247 systemd-logind[1614]: Removed session 54.
Apr 23 00:17:56.365959 sshd[7754]: Accepted publickey for core from 10.0.0.1 port 57890 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:17:56.377078 sshd-session[7754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:17:56.512204 systemd-logind[1614]: New session 55 of user core.
Apr 23 00:17:56.561897 systemd[1]: Started session-55.scope - Session 55 of User core.
Apr 23 00:17:57.849958 sshd[7762]: Connection closed by 10.0.0.1 port 57890
Apr 23 00:17:57.862253 sshd-session[7754]: pam_unix(sshd:session): session closed for user core
Apr 23 00:17:57.899307 systemd[1]: sshd@53-10.0.0.19:22-10.0.0.1:57890.service: Deactivated successfully.
Apr 23 00:17:58.009121 systemd[1]: session-55.scope: Deactivated successfully.
Apr 23 00:17:58.017362 systemd[1]: session-55.scope: Consumed 1.008s CPU time, 16.2M memory peak.
Apr 23 00:17:58.038853 systemd-logind[1614]: Session 55 logged out. Waiting for processes to exit.
Apr 23 00:17:58.043268 systemd-logind[1614]: Removed session 55.
Apr 23 00:18:03.153085 systemd[1]: Started sshd@54-10.0.0.19:22-10.0.0.1:57906.service - OpenSSH per-connection server daemon (10.0.0.1:57906).
Apr 23 00:18:03.455893 sshd[7808]: Accepted publickey for core from 10.0.0.1 port 57906 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:18:03.457331 sshd-session[7808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:18:03.644140 systemd-logind[1614]: New session 56 of user core.
Apr 23 00:18:03.662378 systemd[1]: Started session-56.scope - Session 56 of User core.
Apr 23 00:18:04.895966 kubelet[3117]: E0423 00:18:04.895767 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:18:05.570162 kubelet[3117]: E0423 00:18:05.569902 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:18:05.811407 sshd[7816]: Connection closed by 10.0.0.1 port 57906
Apr 23 00:18:05.819141 sshd-session[7808]: pam_unix(sshd:session): session closed for user core
Apr 23 00:18:05.918117 systemd[1]: sshd@54-10.0.0.19:22-10.0.0.1:57906.service: Deactivated successfully.
Apr 23 00:18:06.064990 systemd[1]: session-56.scope: Deactivated successfully.
Apr 23 00:18:06.065941 systemd[1]: session-56.scope: Consumed 1.597s CPU time, 17.7M memory peak.
Apr 23 00:18:06.113153 systemd-logind[1614]: Session 56 logged out. Waiting for processes to exit.
Apr 23 00:18:06.134952 systemd-logind[1614]: Removed session 56.
Apr 23 00:18:08.565692 kubelet[3117]: E0423 00:18:08.564657 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:18:10.938173 systemd[1]: Started sshd@55-10.0.0.19:22-10.0.0.1:42658.service - OpenSSH per-connection server daemon (10.0.0.1:42658).
Apr 23 00:18:11.614355 sshd[7854]: Accepted publickey for core from 10.0.0.1 port 42658 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:18:11.625390 sshd-session[7854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:18:11.831483 systemd-logind[1614]: New session 57 of user core.
Apr 23 00:18:11.884171 systemd[1]: Started session-57.scope - Session 57 of User core.
Apr 23 00:18:15.660120 kubelet[3117]: E0423 00:18:15.659413 3117 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:18:16.155947 sshd[7864]: Connection closed by 10.0.0.1 port 42658
Apr 23 00:18:16.210165 sshd-session[7854]: pam_unix(sshd:session): session closed for user core
Apr 23 00:18:16.291342 systemd[1]: sshd@55-10.0.0.19:22-10.0.0.1:42658.service: Deactivated successfully.
Apr 23 00:18:16.455905 systemd[1]: session-57.scope: Deactivated successfully.
Apr 23 00:18:16.456304 systemd[1]: session-57.scope: Consumed 3.056s CPU time, 14.8M memory peak.
Apr 23 00:18:16.475651 systemd-logind[1614]: Session 57 logged out. Waiting for processes to exit.
Apr 23 00:18:16.639807 systemd-logind[1614]: Removed session 57.
Apr 23 00:18:21.285352 systemd[1]: Started sshd@56-10.0.0.19:22-10.0.0.1:33690.service - OpenSSH per-connection server daemon (10.0.0.1:33690).
Apr 23 00:18:22.053522 sshd[7911]: Accepted publickey for core from 10.0.0.1 port 33690 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:18:22.063662 sshd-session[7911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:18:22.181216 systemd-logind[1614]: New session 58 of user core.
Apr 23 00:18:22.274779 systemd[1]: Started session-58.scope - Session 58 of User core.
Apr 23 00:18:24.714467 sshd[7922]: Connection closed by 10.0.0.1 port 33690
Apr 23 00:18:24.716993 sshd-session[7911]: pam_unix(sshd:session): session closed for user core
Apr 23 00:18:24.725912 systemd[1]: sshd@56-10.0.0.19:22-10.0.0.1:33690.service: Deactivated successfully.
Apr 23 00:18:24.849026 systemd[1]: session-58.scope: Deactivated successfully.
Apr 23 00:18:24.856767 systemd[1]: session-58.scope: Consumed 1.940s CPU time, 15.9M memory peak.
Apr 23 00:18:24.862190 systemd-logind[1614]: Session 58 logged out. Waiting for processes to exit.
Apr 23 00:18:24.877771 systemd-logind[1614]: Removed session 58.