Apr 20 19:23:46.564055 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 15.2.1_p20260214 p5) 15.2.1 20260214, GNU ld (Gentoo 2.46.0 p1) 2.46.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 14 02:21:25 -00 2026
Apr 20 19:23:46.564121 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a
Apr 20 19:23:46.564136 kernel: BIOS-provided physical RAM map:
Apr 20 19:23:46.564145 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 20 19:23:46.564156 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 20 19:23:46.564168 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 20 19:23:46.564182 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 20 19:23:46.564193 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 20 19:23:46.564205 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 20 19:23:46.564216 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 20 19:23:46.564225 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 20 19:23:46.564234 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 20 19:23:46.564245 kernel: NX (Execute Disable) protection: active
Apr 20 19:23:46.564256 kernel: APIC: Static calls initialized
Apr 20 19:23:46.564269 kernel: SMBIOS 2.8 present.
Apr 20 19:23:46.564282 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 20 19:23:46.564291 kernel: DMI: Memory slots populated: 1/1
Apr 20 19:23:46.564301 kernel: Hypervisor detected: KVM
Apr 20 19:23:46.564312 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 20 19:23:46.564322 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 20 19:23:46.564334 kernel: kvm-clock: using sched offset of 8629667663 cycles
Apr 20 19:23:46.564347 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 20 19:23:46.564360 kernel: tsc: Detected 2793.438 MHz processor
Apr 20 19:23:46.564373 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 20 19:23:46.564390 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 20 19:23:46.564400 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 20 19:23:46.564412 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 20 19:23:46.564427 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 20 19:23:46.564440 kernel: Using GB pages for direct mapping
Apr 20 19:23:46.564450 kernel: ACPI: Early table checksum verification disabled
Apr 20 19:23:46.564463 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 20 19:23:46.564476 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 19:23:46.564491 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 19:23:46.564506 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 19:23:46.564519 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 20 19:23:46.564532 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 19:23:46.564542 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 19:23:46.564553 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 19:23:46.564563 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 19:23:46.564576 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 20 19:23:46.564595 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 20 19:23:46.564605 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 20 19:23:46.564616 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 20 19:23:46.564627 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 20 19:23:46.564637 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 20 19:23:46.564650 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 20 19:23:46.564661 kernel: No NUMA configuration found
Apr 20 19:23:46.564671 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 20 19:23:46.564682 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Apr 20 19:23:46.564693 kernel: Zone ranges:
Apr 20 19:23:46.564704 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 20 19:23:46.564715 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 20 19:23:46.564728 kernel: Normal empty
Apr 20 19:23:46.564739 kernel: Device empty
Apr 20 19:23:46.564749 kernel: Movable zone start for each node
Apr 20 19:23:46.564763 kernel: Early memory node ranges
Apr 20 19:23:46.564776 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 20 19:23:46.564787 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 20 19:23:46.564797 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 20 19:23:46.564808 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 20 19:23:46.564822 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 20 19:23:46.564833 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 20 19:23:46.564843 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 20 19:23:46.564854 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 20 19:23:46.564865 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 20 19:23:46.564879 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 20 19:23:46.564890 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 20 19:23:46.564927 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 20 19:23:46.564937 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 20 19:23:46.564944 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 20 19:23:46.564952 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 20 19:23:46.564960 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 20 19:23:46.564967 kernel: TSC deadline timer available
Apr 20 19:23:46.564975 kernel: CPU topo: Max. logical packages: 1
Apr 20 19:23:46.564984 kernel: CPU topo: Max. logical dies: 1
Apr 20 19:23:46.564995 kernel: CPU topo: Max. dies per package: 1
Apr 20 19:23:46.565236 kernel: CPU topo: Max. threads per core: 1
Apr 20 19:23:46.565281 kernel: CPU topo: Num. cores per package: 4
Apr 20 19:23:46.565290 kernel: CPU topo: Num. threads per package: 4
Apr 20 19:23:46.565299 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 20 19:23:46.565307 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 20 19:23:46.565317 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 20 19:23:46.565361 kernel: kvm-guest: setup PV sched yield
Apr 20 19:23:46.565367 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 20 19:23:46.565373 kernel: Booting paravirtualized kernel on KVM
Apr 20 19:23:46.565379 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 20 19:23:46.565385 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 20 19:23:46.565391 kernel: percpu: Embedded 60 pages/cpu s207960 r8192 d29608 u524288
Apr 20 19:23:46.565397 kernel: pcpu-alloc: s207960 r8192 d29608 u524288 alloc=1*2097152
Apr 20 19:23:46.565404 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 20 19:23:46.565409 kernel: kvm-guest: PV spinlocks enabled
Apr 20 19:23:46.565415 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 20 19:23:46.565421 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a
Apr 20 19:23:46.565427 kernel: random: crng init done
Apr 20 19:23:46.565433 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 20 19:23:46.565440 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 20 19:23:46.565446 kernel: Fallback order for Node 0: 0
Apr 20 19:23:46.565452 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Apr 20 19:23:46.565457 kernel: Policy zone: DMA32
Apr 20 19:23:46.565463 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 20 19:23:46.565468 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 20 19:23:46.565474 kernel: ftrace: allocating 40346 entries in 158 pages
Apr 20 19:23:46.565480 kernel: ftrace: allocated 158 pages with 5 groups
Apr 20 19:23:46.565487 kernel: Dynamic Preempt: voluntary
Apr 20 19:23:46.565492 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 20 19:23:46.565499 kernel: rcu: RCU event tracing is enabled.
Apr 20 19:23:46.565504 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 20 19:23:46.565510 kernel: Trampoline variant of Tasks RCU enabled.
Apr 20 19:23:46.565516 kernel: Rude variant of Tasks RCU enabled.
Apr 20 19:23:46.565521 kernel: Tracing variant of Tasks RCU enabled.
Apr 20 19:23:46.565528 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 20 19:23:46.565534 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 20 19:23:46.565539 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 19:23:46.565545 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 19:23:46.565551 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 19:23:46.565557 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 20 19:23:46.565562 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 20 19:23:46.565569 kernel: Console: colour VGA+ 80x25
Apr 20 19:23:46.565580 kernel: printk: legacy console [ttyS0] enabled
Apr 20 19:23:46.565586 kernel: ACPI: Core revision 20240827
Apr 20 19:23:46.565593 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 20 19:23:46.565599 kernel: APIC: Switch to symmetric I/O mode setup
Apr 20 19:23:46.565605 kernel: x2apic enabled
Apr 20 19:23:46.565611 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 20 19:23:46.565617 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 20 19:23:46.565623 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 20 19:23:46.565630 kernel: kvm-guest: setup PV IPIs
Apr 20 19:23:46.565636 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 20 19:23:46.565642 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 20 19:23:46.565648 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 20 19:23:46.565656 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 20 19:23:46.565662 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 20 19:23:46.565668 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 20 19:23:46.565674 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 20 19:23:46.565680 kernel: Spectre V2 : Mitigation: Retpolines
Apr 20 19:23:46.565686 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 20 19:23:46.565692 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 20 19:23:46.565699 kernel: RETBleed: Vulnerable
Apr 20 19:23:46.565705 kernel: Speculative Store Bypass: Vulnerable
Apr 20 19:23:46.565712 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 20 19:23:46.565718 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 20 19:23:46.565724 kernel: active return thunk: its_return_thunk
Apr 20 19:23:46.565730 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 20 19:23:46.565735 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 20 19:23:46.565743 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 20 19:23:46.565749 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 20 19:23:46.565755 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 20 19:23:46.565761 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 20 19:23:46.565767 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 20 19:23:46.565773 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 20 19:23:46.565779 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 20 19:23:46.565786 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 20 19:23:46.565792 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 20 19:23:46.565798 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 20 19:23:46.565805 kernel: Freeing SMP alternatives memory: 32K
Apr 20 19:23:46.565811 kernel: pid_max: default: 32768 minimum: 301
Apr 20 19:23:46.565816 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 20 19:23:46.565822 kernel: landlock: Up and running.
Apr 20 19:23:46.565828 kernel: SELinux: Initializing.
Apr 20 19:23:46.565836 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 20 19:23:46.565842 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 20 19:23:46.565848 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 20 19:23:46.565854 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 20 19:23:46.565860 kernel: signal: max sigframe size: 3632
Apr 20 19:23:46.565866 kernel: rcu: Hierarchical SRCU implementation.
Apr 20 19:23:46.565872 kernel: rcu: Max phase no-delay instances is 400.
Apr 20 19:23:46.565879 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 20 19:23:46.565885 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 20 19:23:46.565891 kernel: smp: Bringing up secondary CPUs ...
Apr 20 19:23:46.565985 kernel: smpboot: x86: Booting SMP configuration:
Apr 20 19:23:46.565992 kernel: .... node #0, CPUs: #1 #2 #3
Apr 20 19:23:46.565999 kernel: smp: Brought up 1 node, 4 CPUs
Apr 20 19:23:46.566005 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 20 19:23:46.566244 kernel: Memory: 2444328K/2571752K available (14336K kernel code, 2458K rwdata, 31736K rodata, 15944K init, 2284K bss, 121532K reserved, 0K cma-reserved)
Apr 20 19:23:46.566252 kernel: devtmpfs: initialized
Apr 20 19:23:46.566258 kernel: x86/mm: Memory block size: 128MB
Apr 20 19:23:46.566264 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 20 19:23:46.566270 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 20 19:23:46.566276 kernel: pinctrl core: initialized pinctrl subsystem
Apr 20 19:23:46.566282 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 20 19:23:46.566290 kernel: audit: initializing netlink subsys (disabled)
Apr 20 19:23:46.566297 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 20 19:23:46.566303 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 20 19:23:46.566309 kernel: audit: type=2000 audit(1776713016.511:1): state=initialized audit_enabled=0 res=1
Apr 20 19:23:46.566315 kernel: cpuidle: using governor menu
Apr 20 19:23:46.566321 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 20 19:23:46.566327 kernel: dca service started, version 1.12.1
Apr 20 19:23:46.566335 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Apr 20 19:23:46.566341 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 20 19:23:46.566347 kernel: PCI: Using configuration type 1 for base access
Apr 20 19:23:46.566353 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 20 19:23:46.566359 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 20 19:23:46.566365 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 20 19:23:46.566371 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 20 19:23:46.566379 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 20 19:23:46.566385 kernel: ACPI: Added _OSI(Module Device)
Apr 20 19:23:46.566391 kernel: ACPI: Added _OSI(Processor Device)
Apr 20 19:23:46.566396 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 20 19:23:46.566403 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 20 19:23:46.566408 kernel: ACPI: Interpreter enabled
Apr 20 19:23:46.566415 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 20 19:23:46.566422 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 20 19:23:46.566428 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 20 19:23:46.566433 kernel: PCI: Using E820 reservations for host bridge windows
Apr 20 19:23:46.566439 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 20 19:23:46.566445 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 20 19:23:46.566696 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 20 19:23:46.566797 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 20 19:23:46.566891 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 20 19:23:46.566928 kernel: PCI host bridge to bus 0000:00
Apr 20 19:23:46.567412 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 20 19:23:46.567539 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 20 19:23:46.567642 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 20 19:23:46.567750 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 20 19:23:46.567861 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 20 19:23:46.568459 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 20 19:23:46.568592 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 20 19:23:46.568776 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 20 19:23:46.569384 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 20 19:23:46.569586 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Apr 20 19:23:46.569729 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Apr 20 19:23:46.569871 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Apr 20 19:23:46.570393 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 20 19:23:46.571379 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 20 19:23:46.571566 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Apr 20 19:23:46.571711 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Apr 20 19:23:46.571855 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 20 19:23:46.572413 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 20 19:23:46.572572 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Apr 20 19:23:46.572719 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Apr 20 19:23:46.572867 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 20 19:23:46.573457 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 20 19:23:46.573606 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Apr 20 19:23:46.573748 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Apr 20 19:23:46.573889 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 20 19:23:46.574249 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Apr 20 19:23:46.574372 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 20 19:23:46.574461 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 20 19:23:46.574562 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 20 19:23:46.574652 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Apr 20 19:23:46.574741 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Apr 20 19:23:46.574838 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 20 19:23:46.575428 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Apr 20 19:23:46.575451 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 20 19:23:46.575462 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 20 19:23:46.575472 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 20 19:23:46.575482 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 20 19:23:46.575492 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 20 19:23:46.575507 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 20 19:23:46.575517 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 20 19:23:46.575527 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 20 19:23:46.575537 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 20 19:23:46.575547 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 20 19:23:46.575557 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 20 19:23:46.575567 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 20 19:23:46.575580 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 20 19:23:46.575592 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 20 19:23:46.575604 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 20 19:23:46.575616 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 20 19:23:46.575628 kernel: iommu: Default domain type: Translated
Apr 20 19:23:46.575640 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 20 19:23:46.575651 kernel: PCI: Using ACPI for IRQ routing
Apr 20 19:23:46.575665 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 20 19:23:46.575676 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 20 19:23:46.575688 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 20 19:23:46.575846 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 20 19:23:46.576127 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 20 19:23:46.576271 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 20 19:23:46.576288 kernel: vgaarb: loaded
Apr 20 19:23:46.576300 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 20 19:23:46.576312 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 20 19:23:46.576324 kernel: clocksource: Switched to clocksource kvm-clock
Apr 20 19:23:46.576336 kernel: VFS: Disk quotas dquot_6.6.0
Apr 20 19:23:46.576347 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 20 19:23:46.576359 kernel: pnp: PnP ACPI init
Apr 20 19:23:46.576545 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 20 19:23:46.576562 kernel: pnp: PnP ACPI: found 6 devices
Apr 20 19:23:46.576574 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 20 19:23:46.576586 kernel: NET: Registered PF_INET protocol family
Apr 20 19:23:46.576598 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 20 19:23:46.576610 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 20 19:23:46.576622 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 20 19:23:46.576635 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 20 19:23:46.576647 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 20 19:23:46.576659 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 20 19:23:46.576671 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 20 19:23:46.576682 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 20 19:23:46.576694 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 20 19:23:46.576706 kernel: NET: Registered PF_XDP protocol family
Apr 20 19:23:46.576843 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 20 19:23:46.577415 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 20 19:23:46.577552 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 20 19:23:46.577681 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 20 19:23:46.577810 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 20 19:23:46.577968 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 20 19:23:46.577983 kernel: PCI: CLS 0 bytes, default 64
Apr 20 19:23:46.577999 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 20 19:23:46.578266 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 20 19:23:46.578284 kernel: Initialise system trusted keyrings
Apr 20 19:23:46.578297 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 20 19:23:46.578308 kernel: Key type asymmetric registered
Apr 20 19:23:46.578320 kernel: Asymmetric key parser 'x509' registered
Apr 20 19:23:46.578332 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 20 19:23:46.578352 kernel: io scheduler mq-deadline registered
Apr 20 19:23:46.578364 kernel: io scheduler kyber registered
Apr 20 19:23:46.578376 kernel: io scheduler bfq registered
Apr 20 19:23:46.578386 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 20 19:23:46.578397 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 20 19:23:46.578407 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 20 19:23:46.578417 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 20 19:23:46.578428 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 20 19:23:46.578438 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 20 19:23:46.578448 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 20 19:23:46.578458 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 20 19:23:46.578467 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 20 19:23:46.578810 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 20 19:23:46.581363 kernel: rtc_cmos 00:04: registered as rtc0
Apr 20 19:23:46.581390 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Apr 20 19:23:46.581522 kernel: rtc_cmos 00:04: setting system clock to 2026-04-20T19:23:40 UTC (1776713020)
Apr 20 19:23:46.581653 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 20 19:23:46.581667 kernel: intel_pstate: CPU model not supported
Apr 20 19:23:46.581678 kernel: NET: Registered PF_INET6 protocol family
Apr 20 19:23:46.581690 kernel: Segment Routing with IPv6
Apr 20 19:23:46.581703 kernel: In-situ OAM (IOAM) with IPv6
Apr 20 19:23:46.581715 kernel: NET: Registered PF_PACKET protocol family
Apr 20 19:23:46.581727 kernel: Key type dns_resolver registered
Apr 20 19:23:46.581739 kernel: IPI shorthand broadcast: enabled
Apr 20 19:23:46.581750 kernel: sched_clock: Marking stable (3741015937, 566227835)->(4855933349, -548689577)
Apr 20 19:23:46.581762 kernel: registered taskstats version 1
Apr 20 19:23:46.581773 kernel: Loading compiled-in X.509 certificates
Apr 20 19:23:46.581787 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 7cf14208c08026297bea8a5678f7340932b35e4b'
Apr 20 19:23:46.581799 kernel: Demotion targets for Node 0: null
Apr 20 19:23:46.581811 kernel: Key type .fscrypt registered
Apr 20 19:23:46.581822 kernel: Key type fscrypt-provisioning registered
Apr 20 19:23:46.581834 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 20 19:23:46.581846 kernel: ima: Allocated hash algorithm: sha1
Apr 20 19:23:46.581858 kernel: ima: No architecture policies found
Apr 20 19:23:46.581869 kernel: clk: Disabling unused clocks
Apr 20 19:23:46.581882 kernel: Freeing unused kernel image (initmem) memory: 15944K
Apr 20 19:23:46.581937 kernel: Write protecting the kernel read-only data: 47104k
Apr 20 19:23:46.581949 kernel: Freeing unused kernel image (rodata/data gap) memory: 1032K
Apr 20 19:23:46.581962 kernel: Run /init as init process
Apr 20 19:23:46.581973 kernel: with arguments:
Apr 20 19:23:46.581985 kernel: /init
Apr 20 19:23:46.581997 kernel: with environment:
Apr 20 19:23:46.582203 kernel: HOME=/
Apr 20 19:23:46.582217 kernel: TERM=linux
Apr 20 19:23:46.582229 kernel: SCSI subsystem initialized
Apr 20 19:23:46.582241 kernel: libata version 3.00 loaded.
Apr 20 19:23:46.582518 kernel: ahci 0000:00:1f.2: version 3.0
Apr 20 19:23:46.582534 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 20 19:23:46.582678 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Apr 20 19:23:46.582834 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Apr 20 19:23:46.583706 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 20 19:23:46.583930 kernel: scsi host0: ahci
Apr 20 19:23:46.584592 kernel: scsi host1: ahci
Apr 20 19:23:46.584744 kernel: scsi host2: ahci
Apr 20 19:23:46.584886 kernel: scsi host3: ahci
Apr 20 19:23:46.585436 kernel: scsi host4: ahci
Apr 20 19:23:46.587541 kernel: scsi host5: ahci
Apr 20 19:23:46.587571 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Apr 20 19:23:46.587583 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Apr 20 19:23:46.587595 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Apr 20 19:23:46.587610 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Apr 20 19:23:46.587621 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Apr 20 19:23:46.587632 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Apr 20 19:23:46.587643 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 20 19:23:46.587654 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 20 19:23:46.587665 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 20 19:23:46.587675 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 20 19:23:46.587688 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 20 19:23:46.587698 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 20 19:23:46.587709 kernel: ata3.00: LPM support broken, forcing max_power
Apr 20 19:23:46.587719 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 20 19:23:46.587730 kernel: ata3.00: applying bridge limits
Apr 20 19:23:46.587740 kernel: ata3.00: LPM support broken, forcing max_power
Apr 20 19:23:46.587752 kernel: ata3.00: configured for UDMA/100
Apr 20 19:23:46.588512 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 20 19:23:46.588652 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 20 19:23:46.588764 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Apr 20 19:23:46.588776 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 20 19:23:46.588785 kernel: GPT:16515071 != 27000831
Apr 20 19:23:46.588799 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 20 19:23:46.588808 kernel: GPT:16515071 != 27000831
Apr 20 19:23:46.588974 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 20 19:23:46.588987 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 20 19:23:46.588996 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 20 19:23:46.589005 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 20 19:23:46.589484 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 20 19:23:46.589504 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 20 19:23:46.589515 kernel: device-mapper: uevent: version 1.0.3
Apr 20 19:23:46.589526 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 20 19:23:46.589536 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Apr 20 19:23:46.589551 kernel: raid6: avx512x4 gen() 15229 MB/s
Apr 20 19:23:46.589563 kernel: raid6: avx512x2 gen() 24603 MB/s
Apr 20 19:23:46.589574 kernel: raid6: avx512x1 gen() 23207 MB/s
Apr 20 19:23:46.589584 kernel: raid6: avx2x4 gen() 8047 MB/s
Apr 20 19:23:46.589594 kernel: raid6: avx2x2 gen() 17248 MB/s
Apr 20 19:23:46.589604 kernel: raid6: avx2x1 gen() 14077 MB/s
Apr 20 19:23:46.589614 kernel: raid6: using algorithm avx512x2 gen() 24603 MB/s
Apr 20 19:23:46.589625 kernel: raid6: .... xor() 18024 MB/s, rmw enabled
Apr 20 19:23:46.589637 kernel: raid6: using avx512x2 recovery algorithm
Apr 20 19:23:46.589649 kernel: xor: automatically using best checksumming function avx
Apr 20 19:23:46.589660 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 20 19:23:46.589671 kernel: BTRFS: device fsid 2b1891e6-d4d2-4c02-a1ed-3a6feccae86f devid 1 transid 45 /dev/mapper/usr (253:0) scanned by mount (183)
Apr 20 19:23:46.589682 kernel: BTRFS info (device dm-0): first mount of filesystem 2b1891e6-d4d2-4c02-a1ed-3a6feccae86f
Apr 20 19:23:46.589693 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 20 19:23:46.589703 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 20 19:23:46.589716 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 20 19:23:46.589726 kernel: loop: module loaded
Apr 20 19:23:46.589737 kernel: loop0: detected capacity change from 0 to 106960
Apr 20 19:23:46.589747 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 20 19:23:46.589760 systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:2: Support for option DefaultCPUAccounting= has been removed and it is ignored
Apr 20 19:23:46.589773
systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:5: Support for option DefaultBlockIOAccounting= has been removed and it is ignored Apr 20 19:23:46.589785 systemd[1]: Successfully made /usr/ read-only. Apr 20 19:23:46.589797 systemd[1]: systemd 258.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 20 19:23:46.589807 systemd[1]: Detected virtualization kvm. Apr 20 19:23:46.589817 systemd[1]: Detected architecture x86-64. Apr 20 19:23:46.589828 systemd[1]: Running in initrd. Apr 20 19:23:46.589838 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Apr 20 19:23:46.589852 systemd[1]: No hostname configured, using default hostname. Apr 20 19:23:46.589862 systemd[1]: Hostname set to . Apr 20 19:23:46.589873 systemd[1]: Queued start job for default target initrd.target. Apr 20 19:23:46.589884 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Apr 20 19:23:46.590120 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 20 19:23:46.590135 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 20 19:23:46.590156 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 20 19:23:46.590168 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 20 19:23:46.590179 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 20 19:23:46.590190 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Apr 20 19:23:46.590201 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 20 19:23:46.590213 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 20 19:23:46.590226 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 20 19:23:46.590237 systemd[1]: Reached target paths.target - Path Units. Apr 20 19:23:46.590248 systemd[1]: Reached target slices.target - Slice Units. Apr 20 19:23:46.590260 systemd[1]: Reached target swap.target - Swaps. Apr 20 19:23:46.590271 systemd[1]: Reached target timers.target - Timer Units. Apr 20 19:23:46.590282 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 20 19:23:46.590293 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 20 19:23:46.590305 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Apr 20 19:23:46.590316 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 20 19:23:46.590327 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 20 19:23:46.590338 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 20 19:23:46.590349 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 20 19:23:46.590360 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 20 19:23:46.590371 systemd[1]: Reached target sockets.target - Socket Units. Apr 20 19:23:46.590385 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 20 19:23:46.590397 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 20 19:23:46.590408 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 20 19:23:46.590419 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Apr 20 19:23:46.590431 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 20 19:23:46.590442 systemd[1]: Starting systemd-fsck-usr.service... Apr 20 19:23:46.590455 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 20 19:23:46.590468 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 20 19:23:46.590479 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 20 19:23:46.590491 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 20 19:23:46.590617 systemd-journald[323]: Collecting audit messages is enabled. Apr 20 19:23:46.590650 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 20 19:23:46.590661 systemd[1]: Finished systemd-fsck-usr.service. Apr 20 19:23:46.590675 kernel: audit: type=1130 audit(1776713026.578:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:46.590688 systemd-journald[323]: Journal started Apr 20 19:23:46.590711 systemd-journald[323]: Runtime Journal (/run/log/journal/4a58d7ffec33473b8b46f419360440dd) is 6M, max 48.1M, 42.1M free. Apr 20 19:23:46.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:46.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:46.599413 systemd[1]: Started systemd-journald.service - Journal Service. 
Apr 20 19:23:46.599444 kernel: audit: type=1130 audit(1776713026.597:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:46.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:46.618555 kernel: audit: type=1130 audit(1776713026.615:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:46.623617 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 20 19:23:46.636771 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 20 19:23:46.777134 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 20 19:23:46.786887 systemd-modules-load[325]: Inserted module 'br_netfilter' Apr 20 19:23:46.998883 kernel: Bridge firewalling registered Apr 20 19:23:46.808589 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 20 19:23:46.999745 kernel: hrtimer: interrupt took 11487837 ns Apr 20 19:23:46.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:47.010274 kernel: audit: type=1130 audit(1776713026.998:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:47.018521 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Apr 20 19:23:47.085608 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 19:23:47.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:47.116173 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 20 19:23:47.121518 kernel: audit: type=1130 audit(1776713027.097:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:47.128746 systemd-tmpfiles[334]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 20 19:23:47.216340 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 20 19:23:47.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:47.234283 kernel: audit: type=1130 audit(1776713027.216:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:47.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:47.239311 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 20 19:23:47.262187 kernel: audit: type=1130 audit(1776713027.239:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Apr 20 19:23:47.262637 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 20 19:23:47.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:47.263000 audit: BPF prog-id=5 op=LOAD Apr 20 19:23:47.284272 kernel: audit: type=1130 audit(1776713027.263:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:47.268493 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 20 19:23:47.288472 kernel: audit: type=1334 audit(1776713027.263:10): prog-id=5 op=LOAD Apr 20 19:23:47.274850 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 20 19:23:47.299268 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 20 19:23:47.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:47.315957 kernel: audit: type=1130 audit(1776713027.301:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:47.309391 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 20 19:23:47.335524 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 20 19:23:47.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:23:47.365939 dracut-cmdline[360]: dracut-109 Apr 20 19:23:47.386110 dracut-cmdline[360]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a Apr 20 19:23:47.436482 systemd-resolved[351]: Positive Trust Anchors: Apr 20 19:23:47.436607 systemd-resolved[351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 20 19:23:47.436612 systemd-resolved[351]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Apr 20 19:23:47.436650 systemd-resolved[351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 20 19:23:47.503577 systemd-resolved[351]: Defaulting to hostname 'linux'. Apr 20 19:23:47.510803 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 20 19:23:47.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:47.512088 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 20 19:23:47.787338 kernel: Loading iSCSI transport class v2.0-870. 
Apr 20 19:23:47.856109 kernel: iscsi: registered transport (tcp) Apr 20 19:23:47.919157 kernel: iscsi: registered transport (qla4xxx) Apr 20 19:23:47.919501 kernel: QLogic iSCSI HBA Driver Apr 20 19:23:48.074311 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line... Apr 20 19:23:48.155199 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line. Apr 20 19:23:48.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:48.159123 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 20 19:23:48.476865 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 20 19:23:48.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:48.488676 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 20 19:23:48.502294 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 20 19:23:48.678673 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 20 19:23:48.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:48.684000 audit: BPF prog-id=6 op=LOAD Apr 20 19:23:48.684000 audit: BPF prog-id=7 op=LOAD Apr 20 19:23:48.686589 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 20 19:23:48.774140 systemd-udevd[595]: Using default interface naming scheme 'v258'. 
Apr 20 19:23:48.882640 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 20 19:23:48.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:48.890288 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 20 19:23:48.972735 dracut-pre-trigger[664]: rd.md=0: removing MD RAID activation Apr 20 19:23:48.994673 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 20 19:23:48.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:49.009000 audit: BPF prog-id=8 op=LOAD Apr 20 19:23:49.012363 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 20 19:23:49.091588 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 20 19:23:49.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:49.107766 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 20 19:23:49.257001 systemd-networkd[723]: lo: Link UP Apr 20 19:23:49.257093 systemd-networkd[723]: lo: Gained carrier Apr 20 19:23:49.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:49.258734 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 20 19:23:49.264091 systemd[1]: Reached target network.target - Network. 
Apr 20 19:23:49.465602 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 20 19:23:49.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:49.479323 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 20 19:23:49.629299 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 20 19:23:49.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:49.661638 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 20 19:23:49.697737 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 20 19:23:49.720187 kernel: cryptd: max_cpu_qlen set to 1000 Apr 20 19:23:49.721595 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 20 19:23:49.744606 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 20 19:23:49.757825 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 20 19:23:49.768595 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 20 19:23:49.779848 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 20 19:23:49.789897 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 20 19:23:49.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:23:49.828832 kernel: AES CTR mode by8 optimization enabled Apr 20 19:23:49.805461 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 20 19:23:49.818573 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 20 19:23:49.818824 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 19:23:49.874620 disk-uuid[788]: Primary Header is updated. Apr 20 19:23:49.874620 disk-uuid[788]: Secondary Entries is updated. Apr 20 19:23:49.874620 disk-uuid[788]: Secondary Header is updated. Apr 20 19:23:49.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:49.822278 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 20 19:23:49.832818 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 20 19:23:49.833849 systemd-networkd[723]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 20 19:23:49.833853 systemd-networkd[723]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 20 19:23:49.841554 systemd-networkd[723]: eth0: Link UP Apr 20 19:23:49.865319 systemd-networkd[723]: eth0: Gained carrier Apr 20 19:23:49.865335 systemd-networkd[723]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 20 19:23:49.874736 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Apr 20 19:23:49.938438 systemd-networkd[723]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 20 19:23:50.405475 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 20 19:23:50.407354 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 19:23:50.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:51.034980 disk-uuid[794]: Warning: The kernel is still using the old partition table. Apr 20 19:23:51.034980 disk-uuid[794]: The new table will be used at the next reboot or after you Apr 20 19:23:51.034980 disk-uuid[794]: run partprobe(8) or kpartx(8) Apr 20 19:23:51.034980 disk-uuid[794]: The operation has completed successfully. Apr 20 19:23:51.089412 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 20 19:23:51.090945 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 20 19:23:51.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:51.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:51.101182 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Apr 20 19:23:51.256883 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (897) Apr 20 19:23:51.265631 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf Apr 20 19:23:51.265745 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 20 19:23:51.282156 kernel: BTRFS info (device vda6): turning on async discard Apr 20 19:23:51.282278 kernel: BTRFS info (device vda6): enabling free space tree Apr 20 19:23:51.318695 kernel: BTRFS info (device vda6): last unmount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf Apr 20 19:23:51.336809 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 20 19:23:51.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:51.340448 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 20 19:23:51.792301 ignition[916]: Ignition 2.24.0 Apr 20 19:23:51.792321 ignition[916]: Stage: fetch-offline Apr 20 19:23:51.792424 ignition[916]: no configs at "/usr/lib/ignition/base.d" Apr 20 19:23:51.792433 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 19:23:51.792503 ignition[916]: parsed url from cmdline: "" Apr 20 19:23:51.792505 ignition[916]: no config URL provided Apr 20 19:23:51.792574 ignition[916]: reading system config file "/usr/lib/ignition/user.ign" Apr 20 19:23:51.792582 ignition[916]: no config at "/usr/lib/ignition/user.ign" Apr 20 19:23:51.816470 systemd-networkd[723]: eth0: Gained IPv6LL Apr 20 19:23:51.792637 ignition[916]: op(1): [started] loading QEMU firmware config module Apr 20 19:23:51.792640 ignition[916]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 20 19:23:51.864985 ignition[916]: op(1): [finished] loading QEMU firmware config module Apr 20 19:23:52.078882 ignition[916]: parsing config with SHA512: fa324b03f17d63f73052a1f942bf114c0da80ead16682c1ed3ae3d35e58608f89b87e61043c641bcfc25ada40e210fd867b6ec51e6fec8491f3bb032bcd7c2fb Apr 20 19:23:52.102546 unknown[916]: fetched base config from "system" Apr 20 19:23:52.102563 unknown[916]: fetched user config from "qemu" Apr 20 19:23:52.105621 ignition[916]: fetch-offline: fetch-offline passed Apr 20 19:23:52.105742 ignition[916]: Ignition finished successfully Apr 20 19:23:52.134960 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 20 19:23:52.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:52.185202 kernel: kauditd_printk_skb: 20 callbacks suppressed Apr 20 19:23:52.172532 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Apr 20 19:23:52.194891 kernel: audit: type=1130 audit(1776713032.171:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:52.179692 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 20 19:23:52.292369 ignition[926]: Ignition 2.24.0 Apr 20 19:23:52.292422 ignition[926]: Stage: kargs Apr 20 19:23:52.292661 ignition[926]: no configs at "/usr/lib/ignition/base.d" Apr 20 19:23:52.292669 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 19:23:52.294250 ignition[926]: kargs: kargs passed Apr 20 19:23:52.294305 ignition[926]: Ignition finished successfully Apr 20 19:23:52.320667 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 20 19:23:52.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:52.329733 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 20 19:23:52.350745 kernel: audit: type=1130 audit(1776713032.324:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:52.426661 ignition[933]: Ignition 2.24.0 Apr 20 19:23:52.426884 ignition[933]: Stage: disks Apr 20 19:23:52.427251 ignition[933]: no configs at "/usr/lib/ignition/base.d" Apr 20 19:23:52.427260 ignition[933]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 19:23:52.439108 ignition[933]: disks: disks passed Apr 20 19:23:52.439168 ignition[933]: Ignition finished successfully Apr 20 19:23:52.467161 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Apr 20 19:23:52.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:52.473576 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 20 19:23:52.488205 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 20 19:23:52.502094 kernel: audit: type=1130 audit(1776713032.467:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:52.499131 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 20 19:23:52.506220 systemd[1]: Reached target sysinit.target - System Initialization. Apr 20 19:23:52.523506 systemd[1]: Reached target basic.target - Basic System. Apr 20 19:23:52.591244 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 20 19:23:52.740782 systemd-fsck[944]: ROOT: clean, 15/456736 files, 38230/456704 blocks Apr 20 19:23:52.800442 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 20 19:23:52.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:52.829837 kernel: audit: type=1130 audit(1776713032.809:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:52.819324 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 20 19:23:53.441633 kernel: EXT4-fs (vda9): mounted filesystem 2bdffc2e-451a-418b-b04b-9e3cd9229e7e r/w with ordered data mode. Quota mode: none. 
Apr 20 19:23:53.444880 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 20 19:23:53.450702 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 20 19:23:53.476489 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 20 19:23:53.483207 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 20 19:23:53.504771 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 20 19:23:53.504903 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 20 19:23:53.505372 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 20 19:23:53.538578 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 20 19:23:53.632589 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (952) Apr 20 19:23:53.617666 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 20 19:23:53.645901 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf Apr 20 19:23:53.645969 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 20 19:23:53.677211 kernel: BTRFS info (device vda6): turning on async discard Apr 20 19:23:53.677316 kernel: BTRFS info (device vda6): enabling free space tree Apr 20 19:23:53.692366 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 20 19:23:54.580501 kernel: loop1: detected capacity change from 0 to 43472 Apr 20 19:23:54.591203 kernel: loop1: p1 p2 p3 Apr 20 19:23:54.728316 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:23:54.728456 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:23:54.733256 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:23:54.737120 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:23:54.737276 systemd-confext[1042]: device-mapper: reload ioctl on 036bab43330e5a16b58b2997d79b59667c299046b83a0d438261a470d6586a8f-verity (253:1) failed: Invalid argument Apr 20 19:23:54.824222 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:23:55.283448 kernel: erofs: (device dm-1): mounted with root inode @ nid 40. Apr 20 19:23:55.401247 kernel: loop2: detected capacity change from 0 to 43472 Apr 20 19:23:55.409620 kernel: loop2: p1 p2 p3 Apr 20 19:23:55.591246 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:23:55.591343 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:23:55.591388 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:23:55.599344 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:23:55.599408 (sd-merge)[1052]: device-mapper: reload ioctl on 036bab43330e5a16b58b2997d79b59667c299046b83a0d438261a470d6586a8f-verity (253:1) failed: Invalid argument Apr 20 19:23:55.620210 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:23:56.203481 kernel: erofs: (device dm-1): mounted with root inode @ nid 40. Apr 20 19:23:56.205751 (sd-merge)[1052]: Using extensions '00-flatcar-default.raw'. Apr 20 19:23:56.220347 (sd-merge)[1052]: Merged extensions into '/sysroot/etc'. 
Apr 20 19:23:56.288525 initrd-setup-root[1059]: /etc 00-flatcar-default Mon 2026-04-20 19:23:47 UTC Apr 20 19:23:56.316386 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 20 19:23:56.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:56.341243 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 20 19:23:56.366519 kernel: audit: type=1130 audit(1776713036.332:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:56.373682 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 20 19:23:56.419869 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 20 19:23:56.432712 kernel: BTRFS info (device vda6): last unmount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf Apr 20 19:23:56.556145 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 20 19:23:56.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:56.585338 kernel: audit: type=1130 audit(1776713036.560:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:23:56.594309 ignition[1068]: INFO : Ignition 2.24.0 Apr 20 19:23:56.599564 ignition[1068]: INFO : Stage: mount Apr 20 19:23:56.599564 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 20 19:23:56.599564 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 19:23:56.616461 ignition[1068]: INFO : mount: mount passed Apr 20 19:23:56.616461 ignition[1068]: INFO : Ignition finished successfully Apr 20 19:23:56.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:56.664732 kernel: audit: type=1130 audit(1776713036.621:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:56.620242 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 20 19:23:56.637155 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 20 19:23:56.729183 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 20 19:23:56.821341 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1080) Apr 20 19:23:56.831220 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf Apr 20 19:23:56.831334 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 20 19:23:56.903262 kernel: BTRFS info (device vda6): turning on async discard Apr 20 19:23:56.903355 kernel: BTRFS info (device vda6): enabling free space tree Apr 20 19:23:56.922827 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 20 19:23:57.099980 ignition[1097]: INFO : Ignition 2.24.0 Apr 20 19:23:57.099980 ignition[1097]: INFO : Stage: files Apr 20 19:23:57.099980 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 20 19:23:57.099980 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 19:23:57.137726 ignition[1097]: DEBUG : files: compiled without relabeling support, skipping Apr 20 19:23:57.201235 ignition[1097]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 20 19:23:57.201235 ignition[1097]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 20 19:23:57.218150 ignition[1097]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 20 19:23:57.231704 ignition[1097]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 20 19:23:57.246508 unknown[1097]: wrote ssh authorized keys file for user: core Apr 20 19:23:57.252537 ignition[1097]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 20 19:23:57.258358 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 20 19:23:57.258358 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 20 19:23:57.526474 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 20 19:23:57.891784 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 20 19:23:57.891784 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 20 19:23:57.922554 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 20 19:23:57.922554 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 20 19:23:57.922554 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 20 19:23:57.922554 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 20 19:23:57.922554 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 20 19:23:57.922554 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 20 19:23:57.922554 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 20 19:23:57.922554 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 20 19:23:57.922554 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 20 19:23:57.922554 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 20 19:23:57.922554 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 20 19:23:57.922554 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 20 19:23:57.922554 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Apr 20 19:23:58.462257 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 20 19:23:59.460541 ignition[1097]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 20 19:23:59.460541 ignition[1097]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 20 19:23:59.492550 ignition[1097]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 20 19:23:59.492550 ignition[1097]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 20 19:23:59.492550 ignition[1097]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 20 19:23:59.492550 ignition[1097]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 20 19:23:59.492550 ignition[1097]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 20 19:23:59.492550 ignition[1097]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 20 19:23:59.492550 ignition[1097]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 20 19:23:59.492550 ignition[1097]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 20 19:23:59.914737 ignition[1097]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 20 19:23:59.975668 ignition[1097]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 20 19:23:59.975668 ignition[1097]: INFO : files: op(f): [finished] setting 
preset to disabled for "coreos-metadata.service" Apr 20 19:23:59.975668 ignition[1097]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 20 19:23:59.975668 ignition[1097]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 20 19:23:59.975668 ignition[1097]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 20 19:23:59.975668 ignition[1097]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 20 19:23:59.975668 ignition[1097]: INFO : files: files passed Apr 20 19:23:59.975668 ignition[1097]: INFO : Ignition finished successfully Apr 20 19:24:00.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:00.034881 kernel: audit: type=1130 audit(1776713040.005:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:23:59.980628 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 20 19:24:00.014605 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 20 19:24:00.213772 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 20 19:24:00.288263 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 20 19:24:00.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:00.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Apr 20 19:24:00.331817 kernel: audit: type=1130 audit(1776713040.301:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:00.288592 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 20 19:24:00.341676 kernel: audit: type=1131 audit(1776713040.308:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:00.493271 initrd-setup-root-after-ignition[1129]: grep: /sysroot/oem/oem-release: No such file or directory Apr 20 19:24:00.523699 initrd-setup-root-after-ignition[1131]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 20 19:24:00.523699 initrd-setup-root-after-ignition[1131]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 20 19:24:00.564515 initrd-setup-root-after-ignition[1135]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 20 19:24:00.621204 kernel: loop3: detected capacity change from 0 to 43472 Apr 20 19:24:00.663510 kernel: loop3: p1 p2 p3 Apr 20 19:24:00.863289 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:00.863350 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:24:00.863367 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:24:00.865238 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:24:00.868360 systemd-confext[1137]: device-mapper: reload ioctl on loop3p1-verity (253:2) failed: Invalid argument Apr 20 19:24:00.882496 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:01.401565 kernel: erofs: (device dm-2): mounted with root inode @ nid 40. 
Apr 20 19:24:01.522937 kernel: loop4: detected capacity change from 0 to 43472 Apr 20 19:24:01.539857 kernel: loop4: p1 p2 p3 Apr 20 19:24:01.743319 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:01.743452 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:24:01.743469 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:24:01.749230 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:24:01.749773 (sd-merge)[1148]: device-mapper: reload ioctl on loop4p1-verity (253:2) failed: Invalid argument Apr 20 19:24:01.786299 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:02.305166 kernel: erofs: (device dm-2): mounted with root inode @ nid 40. Apr 20 19:24:02.308785 (sd-merge)[1148]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. Apr 20 19:24:02.377937 kernel: device-mapper: ioctl: remove_all left 2 open device(s) Apr 20 19:24:02.419141 kernel: loop4: detected capacity change from 0 to 178200 Apr 20 19:24:02.466487 kernel: loop4: p1 p2 p3 Apr 20 19:24:02.741162 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:02.776263 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:24:02.776504 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:24:02.783288 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:24:02.783436 systemd-sysext[1156]: device-mapper: reload ioctl on 47e3a0d62726bde98fcb471f946aa0f0e9f97280e4f7267ec40f142aba643eb6-verity (253:2) failed: Invalid argument Apr 20 19:24:02.808790 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:03.260243 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. 
Apr 20 19:24:03.348863 kernel: loop5: detected capacity change from 0 to 217752 Apr 20 19:24:03.723386 kernel: loop6: detected capacity change from 0 to 378016 Apr 20 19:24:03.788938 kernel: loop6: p1 p2 p3 Apr 20 19:24:04.012922 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:04.013138 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:24:04.019401 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:24:04.019585 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:24:04.023307 systemd-sysext[1156]: device-mapper: reload ioctl on 5f63b01eb609e19b7df6b1f3554b098a8644903507171258f91f339ee69140b0-verity (253:2) failed: Invalid argument Apr 20 19:24:04.063206 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:04.695806 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. Apr 20 19:24:04.810124 kernel: loop7: detected capacity change from 0 to 178200 Apr 20 19:24:04.821002 kernel: loop7: p1 p2 p3 Apr 20 19:24:04.990570 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:04.990663 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:24:04.990675 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:24:04.996208 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:24:04.998858 (sd-merge)[1175]: device-mapper: reload ioctl on 47e3a0d62726bde98fcb471f946aa0f0e9f97280e4f7267ec40f142aba643eb6-verity (253:2) failed: Invalid argument Apr 20 19:24:05.021386 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:05.384850 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. 
Apr 20 19:24:05.411938 kernel: loop1: detected capacity change from 0 to 217752 Apr 20 19:24:05.572849 kernel: loop3: detected capacity change from 0 to 378016 Apr 20 19:24:05.618208 kernel: loop3: p1 p2 p3 Apr 20 19:24:05.794174 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:05.794296 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:24:05.795333 kernel: device-mapper: table: 253:3: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:24:05.808959 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:24:05.811399 (sd-merge)[1175]: device-mapper: reload ioctl on 5f63b01eb609e19b7df6b1f3554b098a8644903507171258f91f339ee69140b0-verity (253:3) failed: Invalid argument Apr 20 19:24:05.829493 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:06.360382 kernel: erofs: (device dm-3): mounted with root inode @ nid 39. Apr 20 19:24:06.368672 (sd-merge)[1175]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes-v1.35.1-x86-64.raw'. Apr 20 19:24:06.377304 (sd-merge)[1175]: Merged extensions into '/sysroot/usr'. Apr 20 19:24:06.387797 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 20 19:24:06.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:06.437321 kernel: audit: type=1130 audit(1776713046.394:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:06.397215 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 20 19:24:06.504920 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Apr 20 19:24:06.638455 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 20 19:24:06.641503 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 20 19:24:06.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:06.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:06.706637 kernel: audit: type=1130 audit(1776713046.673:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:06.675153 systemd[1]: initrd-parse-etc.service: Triggering OnSuccess= dependencies. Apr 20 19:24:06.717210 kernel: audit: type=1131 audit(1776713046.674:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:06.678326 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 20 19:24:06.729358 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 20 19:24:06.739778 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 20 19:24:06.762714 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 20 19:24:06.961290 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 20 19:24:06.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:24:06.986196 kernel: audit: type=1130 audit(1776713046.962:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:06.971954 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 20 19:24:07.070256 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 20 19:24:07.079630 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 20 19:24:07.084660 systemd[1]: Stopped target timers.target - Timer Units. Apr 20 19:24:07.099615 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 20 19:24:07.099906 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 20 19:24:07.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:07.134947 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 20 19:24:07.188392 systemd[1]: Stopped target basic.target - Basic System. Apr 20 19:24:07.199866 kernel: audit: type=1131 audit(1776713047.132:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:07.206953 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 20 19:24:07.217776 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 20 19:24:07.227188 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 20 19:24:07.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:24:07.327937 kernel: audit: type=1131 audit(1776713047.261:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:07.234413 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Apr 20 19:24:07.248316 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 20 19:24:07.254549 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 20 19:24:07.263408 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 20 19:24:07.263604 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 20 19:24:07.263631 systemd[1]: Stopped target swap.target - Swaps. Apr 20 19:24:07.263667 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 20 19:24:07.263811 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 20 19:24:07.263902 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 20 19:24:07.263937 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 20 19:24:07.264206 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 20 19:24:07.269427 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 20 19:24:07.337530 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 20 19:24:07.338435 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 20 19:24:07.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:07.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Apr 20 19:23:46.564055 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 15.2.1_p20260214 p5) 15.2.1 20260214, GNU ld (Gentoo 2.46.0 p1) 2.46.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 14 02:21:25 -00 2026
Apr 20 19:24:07.532340 kernel: audit: type=1131 audit(1776713047.493:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:07.494439 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 20 19:24:07.561727 kernel: audit: type=1131 audit(1776713047.505:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:07.494575 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 20 19:24:07.505608 systemd[1]: Stopped target paths.target - Path Units.
Apr 20 19:24:07.532905 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 20 19:24:07.533494 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 20 19:24:07.564524 systemd[1]: Stopped target slices.target - Slice Units.
Apr 20 19:24:07.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:07.729585 kernel: audit: type=1131 audit(1776713047.674:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:07.574252 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 20 19:24:07.587484 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 20 19:24:07.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:07.587706 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 20 19:24:07.604908 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 20 19:24:07.774876 kernel: audit: type=1131 audit(1776713047.744:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:07.605253 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 20 19:24:07.605572 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Apr 20 19:24:07.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:07.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:07.605599 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Apr 20 19:24:07.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:07.611925 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 20 19:24:07.613233 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 20 19:24:07.707266 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 20 19:24:07.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:07.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:07.707344 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 20 19:24:07.744956 systemd[1]: ignition-files.service: Consumed 1.060s CPU time.
Apr 20 19:24:07.751541 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 20 19:24:07.770477 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 20 19:24:07.782313 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 20 19:24:07.782424 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 20 19:24:07.799464 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 20 19:24:07.799595 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 20 19:24:07.799715 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 20 19:24:07.799783 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 20 19:24:07.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:07.811759 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 20 19:24:07.812352 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 20 19:24:07.938815 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 20 19:24:07.965362 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 20 19:24:07.966241 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 20 19:24:08.108267 ignition[1204]: INFO : Ignition 2.24.0
Apr 20 19:24:08.113403 ignition[1204]: INFO : Stage: umount
Apr 20 19:24:08.125101 ignition[1204]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 20 19:24:08.132749 ignition[1204]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 19:24:08.149778 ignition[1204]: INFO : umount: umount passed
Apr 20 19:24:08.149778 ignition[1204]: INFO : Ignition finished successfully
Apr 20 19:24:08.162914 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 20 19:24:08.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.163592 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 20 19:24:08.170579 systemd[1]: Stopped target network.target - Network.
Apr 20 19:24:08.181818 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 20 19:24:08.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.182290 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 20 19:24:08.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.187585 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 20 19:24:08.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.187722 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 20 19:24:08.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.196147 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 20 19:24:08.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.196205 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 20 19:24:08.206892 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 20 19:24:08.207409 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 20 19:24:08.218553 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 20 19:24:08.218673 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 20 19:24:08.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.236893 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 20 19:24:08.352000 audit: BPF prog-id=5 op=UNLOAD
Apr 20 19:24:08.300733 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 20 19:24:08.331687 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 20 19:24:08.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.332307 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 20 19:24:08.363242 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 20 19:24:08.397000 audit: BPF prog-id=8 op=UNLOAD
Apr 20 19:24:08.364236 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 20 19:24:08.398741 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 20 19:24:08.405679 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 20 19:24:08.405973 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 20 19:24:08.433826 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 20 19:24:08.519393 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 20 19:24:08.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.519557 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 20 19:24:08.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.528377 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 20 19:24:08.528818 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 20 19:24:08.548684 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 20 19:24:08.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.548805 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 20 19:24:08.580661 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 20 19:24:08.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.717304 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 20 19:24:08.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.718101 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 20 19:24:08.726722 systemd[1]: systemd-udevd.service: Consumed 2.726s CPU time.
Apr 20 19:24:08.729417 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 20 19:24:08.729649 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 20 19:24:08.735811 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 20 19:24:08.735872 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 20 19:24:08.736557 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 20 19:24:08.736642 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 20 19:24:08.736777 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 20 19:24:08.736808 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 20 19:24:08.742297 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 20 19:24:08.746570 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 20 19:24:08.746729 systemd[1]: Stopped systemd-network-generator.service - Generate Network Units from Kernel Command Line.
Apr 20 19:24:08.746844 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 20 19:24:08.746875 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 20 19:24:08.746932 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 20 19:24:08.746954 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 20 19:24:08.747481 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 20 19:24:08.747521 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 20 19:24:08.747590 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 20 19:24:08.747622 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 19:24:08.808790 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 20 19:24:08.831559 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 20 19:24:08.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.981923 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 20 19:24:08.982312 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 20 19:24:08.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:08.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:09.006644 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 20 19:24:09.029317 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 20 19:24:09.120087 systemd[1]: Switching root.
Apr 20 19:24:09.221306 systemd-journald[323]: Journal stopped
Apr 20 19:24:16.338308 systemd-journald[323]: Received SIGTERM from PID 1 (systemd).
Apr 20 19:24:16.340499 kernel: SELinux: policy capability network_peer_controls=1
Apr 20 19:24:16.340876 kernel: SELinux: policy capability open_perms=1
Apr 20 19:24:16.340897 kernel: SELinux: policy capability extended_socket_class=1
Apr 20 19:24:16.340917 kernel: SELinux: policy capability always_check_network=0
Apr 20 19:24:16.340931 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 20 19:24:16.340945 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 20 19:24:16.340965 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 20 19:24:16.340980 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 20 19:24:16.340993 kernel: SELinux: policy capability userspace_initial_context=0
Apr 20 19:24:16.342616 systemd[1]: Successfully loaded SELinux policy in 242.229ms.
Apr 20 19:24:16.342656 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.052ms.
Apr 20 19:24:16.342671 systemd[1]: systemd 258.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 20 19:24:16.342685 systemd[1]: Detected virtualization kvm.
Apr 20 19:24:16.342700 systemd[1]: Detected architecture x86-64.
Apr 20 19:24:16.342712 systemd[1]: Detected first boot.
Apr 20 19:24:16.342725 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Apr 20 19:24:16.342741 zram_generator::config[1252]: No configuration found.
Apr 20 19:24:16.342756 kernel: Guest personality initialized and is inactive
Apr 20 19:24:16.342769 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 20 19:24:16.342781 kernel: Initialized host personality
Apr 20 19:24:16.342795 kernel: NET: Registered PF_VSOCK protocol family
Apr 20 19:24:16.342807 systemd-ssh-generator[1248]: Failed to query local AF_VSOCK CID: Cannot assign requested address
Apr 20 19:24:16.342822 (sd-exec-[1233]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1.
Apr 20 19:24:16.342928 systemd[1]: Applying preset policy.
Apr 20 19:24:16.342943 systemd[1]: Created symlink '/etc/systemd/system/multi-user.target.wants/prepare-helm.service' → '/etc/systemd/system/prepare-helm.service'.
Apr 20 19:24:16.342957 systemd[1]: Created symlink '/etc/systemd/system/timers.target.wants/google-oslogin-cache.timer' → '/usr/lib/systemd/system/google-oslogin-cache.timer'.
Apr 20 19:24:16.342971 systemd[1]: Populated /etc with preset unit settings.
Apr 20 19:24:16.342984 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored
Apr 20 19:24:16.342997 kernel: kauditd_printk_skb: 35 callbacks suppressed
Apr 20 19:24:16.343386 kernel: audit: type=1334 audit(1776713054.327:87): prog-id=10 op=LOAD
Apr 20 19:24:16.343485 kernel: audit: type=1334 audit(1776713054.327:88): prog-id=2 op=UNLOAD
Apr 20 19:24:16.343500 kernel: audit: type=1334 audit(1776713054.327:89): prog-id=11 op=LOAD
Apr 20 19:24:16.343512 kernel: audit: type=1334 audit(1776713054.327:90): prog-id=12 op=LOAD
Apr 20 19:24:16.343525 kernel: audit: type=1334 audit(1776713054.327:91): prog-id=3 op=UNLOAD
Apr 20 19:24:16.343537 kernel: audit: type=1334 audit(1776713054.327:92): prog-id=4 op=UNLOAD
Apr 20 19:24:16.343552 kernel: audit: type=1131 audit(1776713054.346:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:16.343569 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 20 19:24:16.343584 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 20 19:24:16.343597 kernel: audit: type=1130 audit(1776713054.378:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:16.347282 kernel: audit: type=1131 audit(1776713054.378:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:16.347474 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 20 19:24:16.347496 kernel: audit: type=1334 audit(1776713054.390:96): prog-id=10 op=UNLOAD
Apr 20 19:24:16.347511 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 20 19:24:16.347525 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 20 19:24:16.347539 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 20 19:24:16.347552 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 20 19:24:16.347566 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 20 19:24:16.347582 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 20 19:24:16.347597 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 20 19:24:16.347611 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 20 19:24:16.347626 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 20 19:24:16.347640 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 20 19:24:16.347652 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 20 19:24:16.347665 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 20 19:24:16.347677 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 20 19:24:16.347693 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 20 19:24:16.347706 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 20 19:24:16.347718 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 20 19:24:16.347732 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 20 19:24:16.347744 systemd[1]: Reached target imports.target - Image Downloads.
Apr 20 19:24:16.347758 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 20 19:24:16.347772 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 20 19:24:16.347787 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 20 19:24:16.347800 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 20 19:24:16.347814 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 20 19:24:16.347827 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 20 19:24:16.352359 systemd[1]: Reached target remote-integritysetup.target - Remote Integrity Protected Volumes.
Apr 20 19:24:16.352500 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Apr 20 19:24:16.352519 systemd[1]: Reached target slices.target - Slice Units.
Apr 20 19:24:16.352541 systemd[1]: Reached target swap.target - Swaps.
Apr 20 19:24:16.352558 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 20 19:24:16.352575 systemd[1]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password.
Apr 20 19:24:16.352592 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 20 19:24:16.352609 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 20 19:24:16.352627 systemd[1]: Listening on systemd-factory-reset.socket - Factory Reset Management.
Apr 20 19:24:16.352658 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Apr 20 19:24:16.352679 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Apr 20 19:24:16.352696 systemd[1]: Listening on systemd-networkd-varlink.socket - Network Service Varlink Socket.
Apr 20 19:24:16.352712 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 20 19:24:16.352730 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Apr 20 19:24:16.352747 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Apr 20 19:24:16.352763 systemd[1]: Listening on systemd-resolved-monitor.socket - Resolve Monitor Varlink Socket.
Apr 20 19:24:16.352779 systemd[1]: Listening on systemd-resolved-varlink.socket - Resolve Service Varlink Socket.
Apr 20 19:24:16.352798 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 20 19:24:16.352815 systemd[1]: Listening on systemd-udevd-varlink.socket - udev Varlink Socket.
Apr 20 19:24:16.353296 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 20 19:24:16.353419 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 20 19:24:16.353435 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 20 19:24:16.353451 systemd[1]: Mounting media.mount - External Media Directory...
Apr 20 19:24:16.353467 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 20 19:24:16.353523 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 20 19:24:16.353539 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 20 19:24:16.353555 systemd[1]: tmp.mount: x-systemd.graceful-option=usrquota specified, but option is not available, suppressing.
Apr 20 19:24:16.353570 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 20 19:24:16.353587 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 20 19:24:16.353603 systemd[1]: Reached target machines.target - Virtual Machines and Containers.
Apr 20 19:24:16.353622 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 20 19:24:16.353638 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 20 19:24:16.353654 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 20 19:24:16.353670 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 20 19:24:16.353688 systemd[1]: modprobe@dm_mod.service - Load Kernel Module dm_mod was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!dm_mod).
Apr 20 19:24:16.353704 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 20 19:24:16.353720 systemd[1]: modprobe@efi_pstore.service - Load Kernel Module efi_pstore was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!efi_pstore).
Apr 20 19:24:16.353736 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 20 19:24:16.353752 systemd[1]: modprobe@loop.service - Load Kernel Module loop was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!loop).
Apr 20 19:24:16.353768 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 20 19:24:16.353785 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 20 19:24:16.353800 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 20 19:24:16.353815 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 20 19:24:16.353831 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 20 19:24:16.353846 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 20 19:24:16.353862 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 20 19:24:16.353877 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 20 19:24:16.353897 kernel: fuse: init (API version 7.41)
Apr 20 19:24:16.353912 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line...
Apr 20 19:24:16.353928 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 20 19:24:16.353943 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 20 19:24:16.353960 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 20 19:24:16.353978 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 20 19:24:16.353994 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 20 19:24:16.354440 kernel: ACPI: bus type drm_connector registered
Apr 20 19:24:16.354562 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 20 19:24:16.354575 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 20 19:24:16.354629 systemd-journald[1324]: Collecting audit messages is enabled.
Apr 20 19:24:16.354659 systemd[1]: Mounted media.mount - External Media Directory.
Apr 20 19:24:16.354674 systemd-journald[1324]: Journal started
Apr 20 19:24:16.354702 systemd-journald[1324]: Runtime Journal (/run/log/journal/4a58d7ffec33473b8b46f419360440dd) is 6M, max 48.1M, 42.1M free.
Apr 20 19:24:15.216000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Apr 20 19:24:16.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:16.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:16.080000 audit: BPF prog-id=12 op=UNLOAD
Apr 20 19:24:16.080000 audit: BPF prog-id=11 op=UNLOAD
Apr 20 19:24:16.081000 audit: BPF prog-id=13 op=LOAD
Apr 20 19:24:16.083000 audit: BPF prog-id=14 op=LOAD
Apr 20 19:24:16.084000 audit: BPF prog-id=15 op=LOAD
Apr 20 19:24:16.334000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Apr 20 19:24:16.334000 audit[1324]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffd5c5f3650 a2=4000 a3=0 items=0 ppid=1 pid=1324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:24:16.334000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Apr 20 19:24:14.276584 systemd[1]: Queued start job for default target multi-user.target.
Apr 20 19:24:14.335555 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 20 19:24:14.345545 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 20 19:24:14.346178 systemd[1]: systemd-journald.service: Consumed 1.834s CPU time.
Apr 20 19:24:16.369405 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 20 19:24:16.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:16.385469 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 20 19:24:16.399435 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 20 19:24:16.405812 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 20 19:24:16.412496 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 20 19:24:16.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:16.418286 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 20 19:24:16.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:16.426763 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 20 19:24:16.427946 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 20 19:24:16.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:16.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:16.435837 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 20 19:24:16.436780 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 20 19:24:16.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:16.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:16.465637 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 20 19:24:16.465945 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 20 19:24:16.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:16.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:16.473933 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 20 19:24:16.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:16.481528 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line.
Apr 20 19:24:16.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:16.489361 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 20 19:24:16.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:16.499816 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 20 19:24:16.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:16.569637 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 20 19:24:16.579631 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Apr 20 19:24:16.595514 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 20 19:24:16.605247 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 20 19:24:16.611998 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 20 19:24:16.613824 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 20 19:24:16.619961 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 20 19:24:16.650632 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 20 19:24:16.655638 systemd[1]: Starting systemd-confext.service - Merge System Configuration Images into /etc/...
Apr 20 19:24:16.668410 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 20 19:24:16.688256 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 20 19:24:16.701712 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 20 19:24:16.715820 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 20 19:24:16.726417 systemd-journald[1324]: Time spent on flushing to /var/log/journal/4a58d7ffec33473b8b46f419360440dd is 125.290ms for 1209 entries. Apr 20 19:24:16.726417 systemd-journald[1324]: System Journal (/var/log/journal/4a58d7ffec33473b8b46f419360440dd) is 8M, max 163.5M, 155.5M free. Apr 20 19:24:16.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:16.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:16.954435 systemd-journald[1324]: Received client request to flush runtime journal. Apr 20 19:24:16.731291 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 20 19:24:16.742414 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 20 19:24:16.767579 systemd[1]: Starting systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials... Apr 20 19:24:16.787631 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 20 19:24:16.805599 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 20 19:24:16.817752 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 20 19:24:16.906465 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Apr 20 19:24:16.919632 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 20 19:24:16.951172 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 20 19:24:16.962485 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 20 19:24:16.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:16.981561 systemd[1]: Finished systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials. Apr 20 19:24:16.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdb-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:16.996465 kernel: loop4: detected capacity change from 0 to 43472 Apr 20 19:24:16.994503 systemd-tmpfiles[1369]: ACLs are not supported, ignoring. Apr 20 19:24:17.007927 kernel: loop4: p1 p2 p3 Apr 20 19:24:16.994518 systemd-tmpfiles[1369]: ACLs are not supported, ignoring. Apr 20 19:24:16.996906 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 20 19:24:17.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:17.026237 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 20 19:24:17.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:24:17.082335 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 20 19:24:17.172189 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 20 19:24:17.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:17.309754 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 20 19:24:17.311699 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:17.311961 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:24:17.311984 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:24:17.321356 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:24:17.329530 systemd-confext[1371]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 19:24:17.361730 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:17.362931 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 20 19:24:17.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:17.377000 audit: BPF prog-id=16 op=LOAD Apr 20 19:24:17.378000 audit: BPF prog-id=17 op=LOAD Apr 20 19:24:17.380000 audit: BPF prog-id=18 op=LOAD Apr 20 19:24:17.385384 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Apr 20 19:24:17.394000 audit: BPF prog-id=19 op=LOAD Apr 20 19:24:17.395592 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 20 19:24:17.402000 audit: BPF prog-id=20 op=LOAD Apr 20 19:24:17.408567 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 20 19:24:17.420573 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 20 19:24:17.492638 systemd[1]: Starting modprobe@tun.service - Load Kernel Module tun... Apr 20 19:24:17.517000 audit: BPF prog-id=21 op=LOAD Apr 20 19:24:17.517000 audit: BPF prog-id=22 op=LOAD Apr 20 19:24:17.517000 audit: BPF prog-id=23 op=LOAD Apr 20 19:24:17.527714 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 20 19:24:17.578203 kernel: tun: Universal TUN/TAP device driver, 1.6 Apr 20 19:24:17.580315 systemd-tmpfiles[1397]: ACLs are not supported, ignoring. Apr 20 19:24:17.580366 systemd-tmpfiles[1397]: ACLs are not supported, ignoring. Apr 20 19:24:17.580412 systemd[1]: modprobe@tun.service: Deactivated successfully. Apr 20 19:24:17.590559 systemd[1]: Finished modprobe@tun.service - Load Kernel Module tun. Apr 20 19:24:17.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:17.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:17.602874 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 20 19:24:17.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:24:17.627000 audit: BPF prog-id=24 op=LOAD Apr 20 19:24:17.628000 audit: BPF prog-id=25 op=LOAD Apr 20 19:24:17.628000 audit: BPF prog-id=26 op=LOAD Apr 20 19:24:17.639087 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Apr 20 19:24:17.890702 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 20 19:24:17.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:17.929847 systemd-nsresourced[1403]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Apr 20 19:24:17.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:17.934256 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Apr 20 19:24:18.184525 systemd-oomd[1394]: No swap; memory pressure usage will be degraded Apr 20 19:24:18.197459 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 20 19:24:18.210652 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Apr 20 19:24:18.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:18.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:18.226628 systemd[1]: Reached target time-set.target - System Time Set. 
Apr 20 19:24:18.311995 systemd-resolved[1395]: Positive Trust Anchors: Apr 20 19:24:18.312409 systemd-resolved[1395]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 20 19:24:18.312430 systemd-resolved[1395]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Apr 20 19:24:18.312460 systemd-resolved[1395]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 20 19:24:18.325665 systemd-resolved[1395]: Defaulting to hostname 'linux'. Apr 20 19:24:18.335470 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 20 19:24:18.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:18.367785 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 20 19:24:32.399634 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 20 19:24:32.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:24:32.426618 kernel: kauditd_printk_skb: 51 callbacks suppressed Apr 20 19:24:32.438860 kernel: audit: type=1130 audit(1776713072.417:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:32.452000 audit: BPF prog-id=7 op=UNLOAD Apr 20 19:24:32.452000 audit: BPF prog-id=6 op=UNLOAD Apr 20 19:24:32.463353 kernel: audit: type=1334 audit(1776713072.452:147): prog-id=7 op=UNLOAD Apr 20 19:24:32.463576 kernel: audit: type=1334 audit(1776713072.452:148): prog-id=6 op=UNLOAD Apr 20 19:24:32.459000 audit: BPF prog-id=27 op=LOAD Apr 20 19:24:32.467298 kernel: audit: type=1334 audit(1776713072.459:149): prog-id=27 op=LOAD Apr 20 19:24:32.463000 audit: BPF prog-id=28 op=LOAD Apr 20 19:24:32.472167 kernel: audit: type=1334 audit(1776713072.463:150): prog-id=28 op=LOAD Apr 20 19:24:32.474311 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 20 19:24:32.896848 systemd-udevd[1424]: Using default interface naming scheme 'v258'. Apr 20 19:24:33.676494 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 20 19:24:33.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:33.687000 audit: BPF prog-id=29 op=LOAD Apr 20 19:24:33.697670 kernel: audit: type=1130 audit(1776713073.684:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:33.695722 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 20 19:24:33.697886 kernel: audit: type=1334 audit(1776713073.687:152): prog-id=29 op=LOAD Apr 20 19:24:34.235531 systemd-networkd[1426]: lo: Link UP Apr 20 19:24:34.235848 systemd-networkd[1426]: lo: Gained carrier Apr 20 19:24:34.255836 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 20 19:24:34.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:34.260848 systemd[1]: Reached target network.target - Network. Apr 20 19:24:34.273925 kernel: audit: type=1130 audit(1776713074.260:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:34.280438 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 20 19:24:34.298808 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 20 19:24:34.439924 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 20 19:24:34.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:34.459171 kernel: audit: type=1130 audit(1776713074.446:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:34.514494 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Apr 20 19:24:34.686858 systemd-networkd[1426]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 20 19:24:34.688118 systemd-networkd[1426]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 20 19:24:34.695655 systemd-networkd[1426]: eth0: Link UP Apr 20 19:24:34.698129 systemd-networkd[1426]: eth0: Gained carrier Apr 20 19:24:34.698158 systemd-networkd[1426]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 20 19:24:34.725175 systemd-networkd[1426]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 20 19:24:34.731837 systemd-timesyncd[1396]: Network configuration changed, trying to establish connection. Apr 20 19:24:35.286537 systemd-resolved[1395]: Clock change detected. Flushing caches. Apr 20 19:24:35.286954 systemd-timesyncd[1396]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 20 19:24:35.287003 systemd-timesyncd[1396]: Initial clock synchronization to Mon 2026-04-20 19:24:35.285809 UTC. Apr 20 19:24:35.331993 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 20 19:24:35.344979 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 20 19:24:35.572209 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Apr 20 19:24:35.637277 kernel: mousedev: PS/2 mouse device common for all mice Apr 20 19:24:35.643582 kernel: ACPI: button: Power Button [PWRF] Apr 20 19:24:35.720974 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 20 19:24:35.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Apr 20 19:24:35.742011 kernel: audit: type=1130 audit(1776713075.725:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:35.786304 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 20 19:24:35.858469 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 20 19:24:36.226571 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 20 19:24:36.393952 systemd-networkd[1426]: eth0: Gained IPv6LL Apr 20 19:24:36.421707 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 20 19:24:36.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:36.437513 systemd[1]: Reached target network-online.target - Network is Online. Apr 20 19:24:36.771411 kernel: erofs: (device dm-4): mounted with root inode @ nid 40. Apr 20 19:24:37.056034 kernel: loop4: detected capacity change from 0 to 43472 Apr 20 19:24:37.056244 kernel: loop4: p1 p2 p3 Apr 20 19:24:37.637446 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 19:24:37.661878 (sd-merge)[1489]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 19:24:37.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:24:37.709019 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:37.709698 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:24:37.709782 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:24:37.709798 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:24:37.709821 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:38.136317 kernel: erofs: (device dm-4): mounted with root inode @ nid 40. Apr 20 19:24:38.151748 (sd-merge)[1489]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. Apr 20 19:24:38.173846 systemd[1]: Finished systemd-confext.service - Merge System Configuration Images into /etc/. Apr 20 19:24:38.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-confext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:38.267002 kernel: kauditd_printk_skb: 2 callbacks suppressed Apr 20 19:24:38.267168 kernel: audit: type=1130 audit(1776713078.178:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-confext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:38.288886 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Apr 20 19:24:38.308490 kernel: device-mapper: ioctl: remove_all left 4 open device(s) Apr 20 19:24:38.352196 kernel: loop4: detected capacity change from 0 to 378016 Apr 20 19:24:38.359394 kernel: loop4: p1 p2 p3 Apr 20 19:24:38.736616 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:38.737372 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:24:38.737428 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:24:38.737442 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:24:38.746811 systemd-sysext[1499]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 19:24:38.845441 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:39.554279 kernel: erofs: (device dm-4): mounted with root inode @ nid 39. Apr 20 19:24:39.890237 kernel: loop4: detected capacity change from 0 to 217752 Apr 20 19:24:40.734181 kernel: loop4: detected capacity change from 0 to 178200 Apr 20 19:24:40.757280 kernel: loop4: p1 p2 p3 Apr 20 19:24:40.803344 kernel: loop4: p1 p2 p3 Apr 20 19:24:40.987603 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:40.988646 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:24:40.988940 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:24:41.008802 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:24:41.008010 systemd-sysext[1499]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 19:24:41.025182 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:41.313566 kernel: erofs: (device dm-4): mounted with root inode @ nid 39. 
Apr 20 19:24:41.569770 kernel: loop4: detected capacity change from 0 to 378016 Apr 20 19:24:41.585616 kernel: loop4: p1 p2 p3 Apr 20 19:24:41.916009 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:41.916758 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:24:41.927661 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:24:41.928972 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:24:41.932582 (sd-merge)[1519]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 19:24:42.054514 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:42.581299 kernel: erofs: (device dm-4): mounted with root inode @ nid 39. Apr 20 19:24:42.623209 kernel: loop5: detected capacity change from 0 to 217752 Apr 20 19:24:42.813112 kernel: loop6: detected capacity change from 0 to 178200 Apr 20 19:24:42.846489 kernel: loop6: p1 p2 p3 Apr 20 19:24:43.264250 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:43.261848 (sd-merge)[1519]: device-mapper: reload ioctl on loop6p1-verity (253:5) failed: Invalid argument Apr 20 19:24:43.273416 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:24:43.273452 kernel: device-mapper: table: 253:5: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:24:43.273471 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:24:43.273511 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:24:43.754036 kernel: erofs: (device dm-5): mounted with root inode @ nid 39. Apr 20 19:24:43.777739 (sd-merge)[1519]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. Apr 20 19:24:43.846697 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Apr 20 19:24:43.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:43.885313 kernel: audit: type=1130 audit(1776713083.861:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:43.922442 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 20 19:24:43.926834 kernel: device-mapper: ioctl: remove_all left 4 open device(s) Apr 20 19:24:44.253029 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 20 19:24:44.276998 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 20 19:24:44.335606 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 20 19:24:44.376657 systemd-tmpfiles[1536]: ACLs are not supported, ignoring. Apr 20 19:24:44.394257 systemd-tmpfiles[1536]: ACLs are not supported, ignoring. Apr 20 19:24:44.541743 systemd-tmpfiles[1536]: Detected autofs mount point /boot during canonicalization of boot. Apr 20 19:24:44.543871 systemd-tmpfiles[1536]: Skipping /boot Apr 20 19:24:44.749002 systemd-tmpfiles[1536]: Detected autofs mount point /boot during canonicalization of boot. Apr 20 19:24:44.752576 systemd-tmpfiles[1536]: Skipping /boot Apr 20 19:24:45.828746 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 20 19:24:45.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:24:45.857477 kernel: audit: type=1130 audit(1776713085.842:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:46.070588 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 20 19:24:46.113798 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 20 19:24:46.215502 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 20 19:24:46.258843 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 20 19:24:46.321938 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 20 19:24:46.424000 audit[1555]: AUDIT1127 pid=1555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 20 19:24:46.452843 kernel: audit: type=1127 audit(1776713086.424:161): pid=1555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 20 19:24:46.545561 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 20 19:24:46.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:24:46.568128 kernel: audit: type=1130 audit(1776713086.551:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:24:46.577956 augenrules[1566]: No rules Apr 20 19:24:46.576000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 20 19:24:46.578886 systemd[1]: audit-rules.service: Deactivated successfully. Apr 20 19:24:46.576000 audit[1566]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffa114ce60 a2=420 a3=0 items=0 ppid=1542 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:24:46.584969 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 20 19:24:46.618681 kernel: audit: type=1305 audit(1776713086.576:163): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 20 19:24:46.619422 kernel: audit: type=1300 audit(1776713086.576:163): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffa114ce60 a2=420 a3=0 items=0 ppid=1542 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:24:46.576000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 20 19:24:46.632229 kernel: audit: type=1327 audit(1776713086.576:163): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 20 19:24:46.648681 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 20 19:24:46.862306 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Apr 20 19:24:46.883911 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 20 19:24:57.445253 ldconfig[1550]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 20 19:24:57.562969 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 20 19:24:57.666858 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 20 19:24:58.068954 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 20 19:24:58.107023 systemd[1]: Reached target sysinit.target - System Initialization. Apr 20 19:24:58.127794 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 20 19:24:58.138740 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 20 19:24:58.151559 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 20 19:24:58.170292 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 20 19:24:58.197824 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 20 19:24:58.209917 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Apr 20 19:24:58.226772 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Apr 20 19:24:58.243248 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 20 19:24:58.283107 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 20 19:24:58.283510 systemd[1]: Reached target paths.target - Path Units. Apr 20 19:24:58.313674 systemd[1]: Reached target timers.target - Timer Units. 
Apr 20 19:24:58.348684 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 20 19:24:58.478611 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 20 19:24:58.542653 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 20 19:24:58.650882 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 20 19:24:58.677036 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 20 19:24:58.707908 systemd[1]: Listening on systemd-logind-varlink.socket - User Login Management Varlink Socket. Apr 20 19:24:58.748698 systemd[1]: Listening on systemd-machined.socket - Virtual Machine and Container Registration Service Socket. Apr 20 19:24:58.813953 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 20 19:24:58.855774 systemd[1]: Reached target sockets.target - Socket Units. Apr 20 19:24:58.944861 systemd[1]: Reached target basic.target - Basic System. Apr 20 19:24:58.956373 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 20 19:24:58.964767 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 20 19:24:58.990899 systemd[1]: Starting containerd.service - containerd container runtime... Apr 20 19:24:59.035597 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 20 19:24:59.072811 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 20 19:24:59.152291 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 20 19:24:59.171152 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 20 19:24:59.200470 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Apr 20 19:24:59.215742 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 20 19:24:59.245634 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 20 19:24:59.272032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:24:59.371990 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 20 19:24:59.414007 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 20 19:24:59.436702 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 20 19:24:59.446160 jq[1585]: false Apr 20 19:24:59.473409 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 20 19:24:59.505694 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 20 19:24:59.522007 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Refreshing passwd entry cache Apr 20 19:24:59.521895 oslogin_cache_refresh[1587]: Refreshing passwd entry cache Apr 20 19:24:59.536794 extend-filesystems[1586]: Found /dev/vda6 Apr 20 19:24:59.555821 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 20 19:24:59.562797 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 20 19:24:59.574978 extend-filesystems[1586]: Found /dev/vda9 Apr 20 19:24:59.579774 systemd[1]: Starting update-engine.service - Update Engine... Apr 20 19:24:59.585296 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Failure getting users, quitting Apr 20 19:24:59.585296 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Apr 20 19:24:59.584367 oslogin_cache_refresh[1587]: Failure getting users, quitting Apr 20 19:24:59.585917 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Refreshing group entry cache Apr 20 19:24:59.584463 oslogin_cache_refresh[1587]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 20 19:24:59.585659 oslogin_cache_refresh[1587]: Refreshing group entry cache Apr 20 19:24:59.587719 extend-filesystems[1586]: Checking size of /dev/vda9 Apr 20 19:24:59.644703 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 20 19:24:59.656104 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Failure getting groups, quitting Apr 20 19:24:59.656104 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 20 19:24:59.655954 oslogin_cache_refresh[1587]: Failure getting groups, quitting Apr 20 19:24:59.655973 oslogin_cache_refresh[1587]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 20 19:24:59.687030 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 20 19:24:59.696899 extend-filesystems[1586]: Resized partition /dev/vda9 Apr 20 19:24:59.705425 extend-filesystems[1625]: resize2fs 1.47.3 (8-Jul-2025) Apr 20 19:24:59.699113 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 20 19:24:59.716851 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Apr 20 19:24:59.713687 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 20 19:24:59.717801 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Apr 20 19:24:59.721574 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Apr 20 19:24:59.768405 systemd[1]: motdgen.service: Deactivated successfully. 
Apr 20 19:24:59.770965 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 20 19:24:59.776204 jq[1614]: true Apr 20 19:24:59.825671 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 20 19:24:59.829738 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 20 19:24:59.867801 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Apr 20 19:24:59.841751 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 20 19:24:59.958040 extend-filesystems[1625]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 20 19:24:59.958040 extend-filesystems[1625]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 20 19:24:59.958040 extend-filesystems[1625]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Apr 20 19:25:00.065650 extend-filesystems[1586]: Resized filesystem in /dev/vda9 Apr 20 19:24:59.969289 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 20 19:24:59.974999 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 20 19:25:00.228788 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 20 19:25:00.238284 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 20 19:25:00.247704 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 20 19:25:00.381306 jq[1633]: true Apr 20 19:25:00.382795 tar[1631]: linux-amd64/LICENSE Apr 20 19:25:00.382795 tar[1631]: linux-amd64/helm Apr 20 19:25:00.430176 update_engine[1606]: I20260420 19:25:00.429291 1606 main.cc:92] Flatcar Update Engine starting Apr 20 19:25:00.877416 systemd-logind[1602]: Watching system buttons on /dev/input/event2 (Power Button) Apr 20 19:25:00.878205 systemd-logind[1602]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 20 19:25:00.879771 systemd-logind[1602]: New seat seat0. 
Apr 20 19:25:00.885408 systemd[1]: Started systemd-logind.service - User Login Management. Apr 20 19:25:01.076168 bash[1675]: Updated "/home/core/.ssh/authorized_keys" Apr 20 19:25:01.126130 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 20 19:25:01.149601 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 20 19:25:01.358011 dbus-daemon[1583]: [system] SELinux support is enabled Apr 20 19:25:01.429257 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 20 19:25:01.844798 sshd_keygen[1611]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 20 19:25:01.856469 dbus-daemon[1583]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 20 19:25:01.882818 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 20 19:25:01.888399 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 20 19:25:01.929129 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 20 19:25:01.929484 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 20 19:25:01.943726 systemd[1]: Started update-engine.service - Update Engine. Apr 20 19:25:02.056121 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 20 19:25:02.081918 update_engine[1606]: I20260420 19:25:02.076925 1606 update_check_scheduler.cc:74] Next update check in 2m51s Apr 20 19:25:02.580888 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 20 19:25:02.637745 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Apr 20 19:25:02.856588 systemd[1]: issuegen.service: Deactivated successfully. Apr 20 19:25:02.865574 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 20 19:25:02.963604 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 20 19:25:03.168894 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 20 19:25:03.226312 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 20 19:25:03.241495 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 20 19:25:03.251538 systemd[1]: Reached target getty.target - Login Prompts. Apr 20 19:25:03.828016 locksmithd[1698]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 20 19:25:05.938796 containerd[1634]: time="2026-04-20T19:25:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 20 19:25:06.141133 containerd[1634]: time="2026-04-20T19:25:06.108170451Z" level=info msg="starting containerd" revision=dea7da592f5d1d2b7755e3a161be07f43fad8f75 version=v2.2.1 Apr 20 19:25:06.370511 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 20 19:25:06.428728 systemd[1]: Started sshd@0-1-10.0.0.20:22-10.0.0.1:44514.service - OpenSSH per-connection server daemon (10.0.0.1:44514). 
Apr 20 19:25:06.825218 containerd[1634]: time="2026-04-20T19:25:06.811776898Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="335.08µs" Apr 20 19:25:06.830800 containerd[1634]: time="2026-04-20T19:25:06.830177578Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 20 19:25:06.841847 containerd[1634]: time="2026-04-20T19:25:06.841582786Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 20 19:25:06.841847 containerd[1634]: time="2026-04-20T19:25:06.841871252Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 20 19:25:07.435484 containerd[1634]: time="2026-04-20T19:25:07.303998582Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 20 19:25:07.470856 containerd[1634]: time="2026-04-20T19:25:07.469755847Z" level=info msg="loading plugin" id=io.containerd.mount-handler.v1.erofs type=io.containerd.mount-handler.v1 Apr 20 19:25:07.491118 containerd[1634]: time="2026-04-20T19:25:07.483370604Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 20 19:25:07.502198 containerd[1634]: time="2026-04-20T19:25:07.501947804Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 20 19:25:07.536441 containerd[1634]: time="2026-04-20T19:25:07.508888159Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 20 19:25:07.661950 containerd[1634]: time="2026-04-20T19:25:07.636906290Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 20 19:25:07.675526 containerd[1634]: time="2026-04-20T19:25:07.668831111Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 20 19:25:07.681880 containerd[1634]: time="2026-04-20T19:25:07.681440462Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 20 19:25:07.690177 containerd[1634]: time="2026-04-20T19:25:07.685232307Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Apr 20 19:25:07.720479 containerd[1634]: time="2026-04-20T19:25:07.705719465Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 20 19:25:07.731160 containerd[1634]: time="2026-04-20T19:25:07.729252395Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 20 19:25:07.812166 containerd[1634]: time="2026-04-20T19:25:07.809313102Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 20 19:25:07.847427 containerd[1634]: time="2026-04-20T19:25:07.845984628Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 20 19:25:08.026563 containerd[1634]: time="2026-04-20T19:25:08.002462543Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 20 19:25:08.053138 containerd[1634]: time="2026-04-20T19:25:08.047134895Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 20 19:25:08.078951 containerd[1634]: time="2026-04-20T19:25:08.078776364Z" level=info msg="loading plugin" 
id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 20 19:25:08.086318 containerd[1634]: time="2026-04-20T19:25:08.084482811Z" level=info msg="metadata content store policy set" policy=shared Apr 20 19:25:08.158410 containerd[1634]: time="2026-04-20T19:25:08.157191802Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 20 19:25:08.179332 containerd[1634]: time="2026-04-20T19:25:08.177742711Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 20 19:25:08.179332 containerd[1634]: time="2026-04-20T19:25:08.178927260Z" level=info msg="built-in NRI default validator is disabled" Apr 20 19:25:08.179332 containerd[1634]: time="2026-04-20T19:25:08.178948869Z" level=info msg="runtime interface created" Apr 20 19:25:08.179332 containerd[1634]: time="2026-04-20T19:25:08.178954276Z" level=info msg="created NRI interface" Apr 20 19:25:08.179332 containerd[1634]: time="2026-04-20T19:25:08.179014727Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Apr 20 19:25:08.224976 containerd[1634]: time="2026-04-20T19:25:08.218991737Z" level=info msg="skip loading plugin" error="failed to check mkfs.erofs availability: failed to run mkfs.erofs --help: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Apr 20 19:25:08.243789 containerd[1634]: time="2026-04-20T19:25:08.230526503Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 20 19:25:08.249583 containerd[1634]: time="2026-04-20T19:25:08.247637925Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 20 19:25:08.249583 containerd[1634]: time="2026-04-20T19:25:08.248102930Z" level=info msg="loading plugin" id=io.containerd.mount-manager.v1.bolt type=io.containerd.mount-manager.v1 Apr 20 19:25:08.280789 containerd[1634]: 
time="2026-04-20T19:25:08.280103184Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 20 19:25:08.305628 containerd[1634]: time="2026-04-20T19:25:08.299149233Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 20 19:25:08.311517 containerd[1634]: time="2026-04-20T19:25:08.304412555Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 20 19:25:08.324231 containerd[1634]: time="2026-04-20T19:25:08.317317015Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 20 19:25:08.336200 containerd[1634]: time="2026-04-20T19:25:08.329979758Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 20 19:25:08.338517 containerd[1634]: time="2026-04-20T19:25:08.337485751Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 20 19:25:08.343942 containerd[1634]: time="2026-04-20T19:25:08.341870079Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 20 19:25:08.343942 containerd[1634]: time="2026-04-20T19:25:08.342284269Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 20 19:25:08.343942 containerd[1634]: time="2026-04-20T19:25:08.342307194Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 20 19:25:08.362185 containerd[1634]: time="2026-04-20T19:25:08.360360792Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 20 19:25:08.368532 containerd[1634]: time="2026-04-20T19:25:08.366697570Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 20 19:25:08.374356 
containerd[1634]: time="2026-04-20T19:25:08.372315156Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 20 19:25:08.380981 containerd[1634]: time="2026-04-20T19:25:08.380503695Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 20 19:25:08.457380 containerd[1634]: time="2026-04-20T19:25:08.455589348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 20 19:25:08.465210 containerd[1634]: time="2026-04-20T19:25:08.461981046Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 20 19:25:08.465210 containerd[1634]: time="2026-04-20T19:25:08.462523867Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 20 19:25:08.465210 containerd[1634]: time="2026-04-20T19:25:08.462668332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 20 19:25:08.465210 containerd[1634]: time="2026-04-20T19:25:08.462765696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.mounts type=io.containerd.grpc.v1 Apr 20 19:25:08.465210 containerd[1634]: time="2026-04-20T19:25:08.462781476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 20 19:25:08.465210 containerd[1634]: time="2026-04-20T19:25:08.464683602Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 20 19:25:08.465210 containerd[1634]: time="2026-04-20T19:25:08.465000210Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 20 19:25:08.525120 containerd[1634]: time="2026-04-20T19:25:08.512187704Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 20 19:25:08.547314 containerd[1634]: time="2026-04-20T19:25:08.542410117Z" level=info msg="Get image filesystem path 
\"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 20 19:25:08.553333 containerd[1634]: time="2026-04-20T19:25:08.546235948Z" level=info msg="Start snapshots syncer" Apr 20 19:25:08.560381 containerd[1634]: time="2026-04-20T19:25:08.558243159Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 20 19:25:08.677121 tar[1631]: linux-amd64/README.md Apr 20 19:25:09.183975 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 44514 ssh2: RSA SHA256:Vuw28g2Hzc/9RcV0fwPqovqZniOEIBgxzeZWxyql2YY Apr 20 19:25:09.227623 containerd[1634]: time="2026-04-20T19:25:09.193895244Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderS
tateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 20 19:25:09.227623 containerd[1634]: time="2026-04-20T19:25:09.194817566Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 20 19:25:09.226796 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 20 19:25:09.233457 containerd[1634]: time="2026-04-20T19:25:09.210110788Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 20 19:25:09.233457 containerd[1634]: time="2026-04-20T19:25:09.224344151Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 20 19:25:09.233457 containerd[1634]: time="2026-04-20T19:25:09.224866461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 20 19:25:09.233457 containerd[1634]: time="2026-04-20T19:25:09.229482484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 20 19:25:09.233457 containerd[1634]: time="2026-04-20T19:25:09.229908359Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 20 19:25:09.237334 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:25:09.263115 containerd[1634]: time="2026-04-20T19:25:09.242897666Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 20 19:25:09.337528 containerd[1634]: 
time="2026-04-20T19:25:09.279929958Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 20 19:25:09.343480 containerd[1634]: time="2026-04-20T19:25:09.343107511Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 20 19:25:09.383642 containerd[1634]: time="2026-04-20T19:25:09.371648225Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 20 19:25:09.389451 containerd[1634]: time="2026-04-20T19:25:09.387766398Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 20 19:25:09.396496 containerd[1634]: time="2026-04-20T19:25:09.393221236Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 20 19:25:09.423829 containerd[1634]: time="2026-04-20T19:25:09.409194109Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 20 19:25:09.427986 containerd[1634]: time="2026-04-20T19:25:09.426184034Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 20 19:25:09.506773 containerd[1634]: time="2026-04-20T19:25:09.492335524Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 20 19:25:09.515451 containerd[1634]: time="2026-04-20T19:25:09.510133845Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 20 19:25:09.526594 containerd[1634]: time="2026-04-20T19:25:09.521358174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 20 19:25:09.526594 containerd[1634]: time="2026-04-20T19:25:09.524018187Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 20 19:25:09.526594 containerd[1634]: time="2026-04-20T19:25:09.524199109Z" level=info msg="Connect containerd service" Apr 20 19:25:09.526594 containerd[1634]: time="2026-04-20T19:25:09.524670099Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 20 19:25:09.528168 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 20 19:25:09.589900 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 20 19:25:09.729513 containerd[1634]: time="2026-04-20T19:25:09.697332327Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 20 19:25:09.786646 systemd-logind[1602]: New session '1' of user 'core' with class 'user' and type 'tty'. Apr 20 19:25:10.042991 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 20 19:25:10.071585 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 20 19:25:10.223396 (systemd)[1737]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:25:10.530942 systemd-logind[1602]: New session '2' of user 'core' with class 'manager-early' and type 'unspecified'. Apr 20 19:25:11.508656 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 19:25:12.026238 (kubelet)[1760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:25:12.074021 containerd[1634]: time="2026-04-20T19:25:12.068142687Z" level=info msg="Start subscribing containerd event" Apr 20 19:25:12.124220 containerd[1634]: time="2026-04-20T19:25:12.087890293Z" level=info msg="Start recovering state" Apr 20 19:25:12.156135 containerd[1634]: time="2026-04-20T19:25:12.155778547Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 20 19:25:12.162102 containerd[1634]: time="2026-04-20T19:25:12.156403065Z" level=info msg="Start event monitor" Apr 20 19:25:12.177313 containerd[1634]: time="2026-04-20T19:25:12.172354417Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 20 19:25:12.217799 containerd[1634]: time="2026-04-20T19:25:12.216133175Z" level=info msg="Start cni network conf syncer for default" Apr 20 19:25:12.228819 containerd[1634]: time="2026-04-20T19:25:12.221819781Z" level=info msg="Start streaming server" Apr 20 19:25:12.234975 containerd[1634]: time="2026-04-20T19:25:12.233311892Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 20 19:25:12.240460 containerd[1634]: time="2026-04-20T19:25:12.239332668Z" level=info msg="runtime interface starting up..." Apr 20 19:25:12.243778 containerd[1634]: time="2026-04-20T19:25:12.242279555Z" level=info msg="starting plugins..." Apr 20 19:25:12.245811 containerd[1634]: time="2026-04-20T19:25:12.245506323Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 20 19:25:12.261664 systemd[1]: Started containerd.service - containerd container runtime. Apr 20 19:25:12.279260 systemd[1]: Reached target multi-user.target - Multi-User System. 
Apr 20 19:25:12.297492 containerd[1634]: time="2026-04-20T19:25:12.286877753Z" level=info msg="containerd successfully booted in 6.382150s" Apr 20 19:25:12.544956 systemd[1737]: Queued start job for default target default.target. Apr 20 19:25:12.565640 systemd[1737]: Created slice app.slice - User Application Slice. Apr 20 19:25:12.566251 systemd[1737]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Apr 20 19:25:12.566274 systemd[1737]: Reached target machines.target - Virtual Machines and Containers. Apr 20 19:25:12.569844 systemd[1737]: Reached target paths.target - Paths. Apr 20 19:25:12.571715 systemd[1737]: Reached target timers.target - Timers. Apr 20 19:25:12.586665 systemd[1737]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 20 19:25:12.590979 systemd[1737]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password. Apr 20 19:25:12.615613 systemd[1737]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Apr 20 19:25:12.794709 systemd[1737]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Apr 20 19:25:12.804569 systemd[1737]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 20 19:25:12.804764 systemd[1737]: Reached target sockets.target - Sockets. Apr 20 19:25:12.804809 systemd[1737]: Reached target basic.target - Basic System. Apr 20 19:25:12.804838 systemd[1737]: Reached target default.target - Main User Target. Apr 20 19:25:12.804864 systemd[1737]: Startup finished in 2.225s. Apr 20 19:25:12.810853 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 20 19:25:12.943679 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 20 19:25:12.955988 systemd[1]: Startup finished in 7.052s (kernel) + 25.703s (initrd) + 1min 2.828s (userspace) = 1min 35.584s. 
Apr 20 19:25:13.660901 systemd[1]: Started sshd@1-4097-10.0.0.20:22-10.0.0.1:44524.service - OpenSSH per-connection server daemon (10.0.0.1:44524). Apr 20 19:25:15.276736 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 44524 ssh2: RSA SHA256:Vuw28g2Hzc/9RcV0fwPqovqZniOEIBgxzeZWxyql2YY Apr 20 19:25:15.376929 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:25:15.629742 systemd-logind[1602]: New session '3' of user 'core' with class 'user' and type 'tty'. Apr 20 19:25:15.689343 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 20 19:25:16.357804 sshd[1780]: Connection closed by 10.0.0.1 port 44524 Apr 20 19:25:16.389263 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Apr 20 19:25:16.519815 systemd[1]: sshd@1-4097-10.0.0.20:22-10.0.0.1:44524.service: Deactivated successfully. Apr 20 19:25:16.559890 systemd[1]: session-3.scope: Deactivated successfully. Apr 20 19:25:16.605397 systemd-logind[1602]: Session 3 logged out. Waiting for processes to exit. Apr 20 19:25:16.668702 systemd[1]: Started sshd@2-8193-10.0.0.20:22-10.0.0.1:38018.service - OpenSSH per-connection server daemon (10.0.0.1:38018). Apr 20 19:25:16.728767 systemd-logind[1602]: Removed session 3. Apr 20 19:25:18.135423 sshd[1786]: Accepted publickey for core from 10.0.0.1 port 38018 ssh2: RSA SHA256:Vuw28g2Hzc/9RcV0fwPqovqZniOEIBgxzeZWxyql2YY Apr 20 19:25:18.174577 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:25:18.383909 systemd-logind[1602]: New session '4' of user 'core' with class 'user' and type 'tty'. Apr 20 19:25:18.452202 systemd[1]: Started session-4.scope - Session 4 of User core. 
Apr 20 19:25:18.727028 sshd[1790]: Connection closed by 10.0.0.1 port 38018 Apr 20 19:25:18.732583 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Apr 20 19:25:18.834278 systemd[1]: sshd@2-8193-10.0.0.20:22-10.0.0.1:38018.service: Deactivated successfully. Apr 20 19:25:18.939423 systemd[1]: session-4.scope: Deactivated successfully. Apr 20 19:25:18.978812 systemd-logind[1602]: Session 4 logged out. Waiting for processes to exit. Apr 20 19:25:19.052012 systemd[1]: Started sshd@3-2-10.0.0.20:22-10.0.0.1:38026.service - OpenSSH per-connection server daemon (10.0.0.1:38026). Apr 20 19:25:19.084292 systemd-logind[1602]: Removed session 4. Apr 20 19:25:20.420885 sshd[1796]: Accepted publickey for core from 10.0.0.1 port 38026 ssh2: RSA SHA256:Vuw28g2Hzc/9RcV0fwPqovqZniOEIBgxzeZWxyql2YY Apr 20 19:25:20.440660 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:25:20.674951 systemd-logind[1602]: New session '5' of user 'core' with class 'user' and type 'tty'. Apr 20 19:25:20.770456 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 20 19:25:21.032006 kubelet[1760]: E0420 19:25:21.026883 1760 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:25:21.044583 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:25:21.044744 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:25:21.052900 systemd[1]: kubelet.service: Consumed 9.095s CPU time, 258.3M memory peak. 
Apr 20 19:25:21.378779 sshd[1801]: Connection closed by 10.0.0.1 port 38026 Apr 20 19:25:21.387200 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Apr 20 19:25:21.582272 systemd[1]: sshd@3-2-10.0.0.20:22-10.0.0.1:38026.service: Deactivated successfully. Apr 20 19:25:21.688900 systemd[1]: session-5.scope: Deactivated successfully. Apr 20 19:25:21.766867 systemd-logind[1602]: Session 5 logged out. Waiting for processes to exit. Apr 20 19:25:21.877529 systemd[1]: Started sshd@4-12289-10.0.0.20:22-10.0.0.1:38042.service - OpenSSH per-connection server daemon (10.0.0.1:38042). Apr 20 19:25:21.959546 systemd-logind[1602]: Removed session 5. Apr 20 19:25:24.324180 sshd[1808]: Accepted publickey for core from 10.0.0.1 port 38042 ssh2: RSA SHA256:Vuw28g2Hzc/9RcV0fwPqovqZniOEIBgxzeZWxyql2YY Apr 20 19:25:24.424996 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:25:24.810580 systemd-logind[1602]: New session '6' of user 'core' with class 'user' and type 'tty'. Apr 20 19:25:24.905028 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 20 19:25:26.223437 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 20 19:25:26.225439 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 20 19:25:31.360590 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 20 19:25:31.440927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:25:37.155669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 19:25:37.331589 (kubelet)[1841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:25:42.458371 kubelet[1841]: E0420 19:25:42.456879 1841 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:25:42.484969 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:25:42.491472 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:25:42.519196 systemd[1]: kubelet.service: Consumed 5.064s CPU time, 109.5M memory peak. Apr 20 19:25:45.225531 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 20 19:25:45.428995 (dockerd)[1850]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 20 19:25:47.457688 update_engine[1606]: I20260420 19:25:47.428813 1606 update_attempter.cc:509] Updating boot flags... Apr 20 19:25:52.543855 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 20 19:25:52.679414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:25:56.145532 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 19:25:56.268622 (kubelet)[1888]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:25:59.275674 dockerd[1850]: time="2026-04-20T19:25:59.266643998Z" level=info msg="Starting up" Apr 20 19:25:59.470874 dockerd[1850]: time="2026-04-20T19:25:59.466362657Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 20 19:26:01.824711 dockerd[1850]: time="2026-04-20T19:26:01.207743067Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 20 19:26:02.631562 kubelet[1888]: E0420 19:26:02.627716 1888 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:26:02.660027 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:26:02.661464 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:26:02.733932 systemd[1]: kubelet.service: Consumed 4.334s CPU time, 109.5M memory peak. Apr 20 19:26:04.853566 systemd[1]: var-lib-docker-metacopy\x2dcheck3777346939-merged.mount: Deactivated successfully. Apr 20 19:26:06.036027 dockerd[1850]: time="2026-04-20T19:26:06.026643438Z" level=info msg="Loading containers: start." Apr 20 19:26:06.587931 kernel: Initializing XFRM netlink socket Apr 20 19:26:12.811399 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 20 19:26:12.976827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:26:15.245297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 19:26:15.337961 (kubelet)[1974]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:26:21.768026 kubelet[1974]: E0420 19:26:21.765819 1974 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:26:21.815629 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:26:21.821288 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:26:21.841471 systemd[1]: kubelet.service: Consumed 4.022s CPU time, 110.7M memory peak. Apr 20 19:26:32.031919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 20 19:26:32.186025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:26:35.066816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:26:35.114704 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:26:35.166696 systemd-networkd[1426]: docker0: Link UP Apr 20 19:26:35.625512 dockerd[1850]: time="2026-04-20T19:26:35.624398821Z" level=info msg="Loading containers: done." 
Apr 20 19:26:37.123916 dockerd[1850]: time="2026-04-20T19:26:37.119436356Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 20 19:26:37.288549 dockerd[1850]: time="2026-04-20T19:26:37.276259972Z" level=info msg="Docker daemon" commit=45873be4ae3f5488c9498b3d9f17deaddaf609f4 containerd-snapshotter=false storage-driver=overlay2 version=28.2.2 Apr 20 19:26:37.330858 dockerd[1850]: time="2026-04-20T19:26:37.318589805Z" level=info msg="Initializing buildkit" Apr 20 19:26:37.937403 dockerd[1850]: time="2026-04-20T19:26:37.924799872Z" level=warning msg="CDI setup error /var/run/cdi: failed to monitor for changes: no such file or directory" Apr 20 19:26:37.947767 dockerd[1850]: time="2026-04-20T19:26:37.943811339Z" level=warning msg="CDI setup error /etc/cdi: failed to monitor for changes: no such file or directory" Apr 20 19:26:40.257750 dockerd[1850]: time="2026-04-20T19:26:40.257406533Z" level=info msg="Completed buildkit initialization" Apr 20 19:26:43.162171 dockerd[1850]: time="2026-04-20T19:26:43.153978500Z" level=info msg="Daemon has completed initialization" Apr 20 19:26:43.162171 dockerd[1850]: time="2026-04-20T19:26:43.157406784Z" level=info msg="API listen on /run/docker.sock" Apr 20 19:26:43.190834 systemd[1]: Started docker.service - Docker Application Container Engine. 
Apr 20 19:26:48.090361 kubelet[2093]: E0420 19:26:48.087851 2093 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:26:48.271199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:26:48.281291 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:26:48.346008 systemd[1]: kubelet.service: Consumed 7.539s CPU time, 112.4M memory peak. Apr 20 19:26:58.426815 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 20 19:26:58.635662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:27:02.843832 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:27:02.979368 (kubelet)[2154]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:27:13.074702 kubelet[2154]: E0420 19:27:13.073892 2154 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:27:13.123407 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:27:13.130674 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:27:13.157403 systemd[1]: kubelet.service: Consumed 7.346s CPU time, 109M memory peak. 
Apr 20 19:27:21.421730 containerd[1634]: time="2026-04-20T19:27:21.417599497Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.1\"" Apr 20 19:27:23.339998 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 20 19:27:23.450973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:27:26.265107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:27:26.322614 (kubelet)[2174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:27:36.391891 containerd[1634]: time="2026-04-20T19:27:36.362784491Z" level=info msg="fetch failed" error="failed to do request: Head \"https://registry.k8s.io/v2/kube-apiserver/manifests/v1.35.1\": net/http: TLS handshake timeout" host=registry.k8s.io Apr 20 19:27:36.515016 containerd[1634]: time="2026-04-20T19:27:36.500625747Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.1: active requests=0, bytes read=0" Apr 20 19:27:36.560325 containerd[1634]: time="2026-04-20T19:27:36.516659530Z" level=error msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.1\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/kube-apiserver:v1.35.1\": failed to resolve image: failed to do request: Head \"https://registry.k8s.io/v2/kube-apiserver/manifests/v1.35.1\": net/http: TLS handshake timeout" Apr 20 19:27:36.859148 containerd[1634]: time="2026-04-20T19:27:36.858591085Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.1\"" Apr 20 19:27:37.389293 kubelet[2174]: E0420 19:27:37.385885 2174 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no 
such file or directory" Apr 20 19:27:37.447239 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:27:37.449930 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:27:37.453209 systemd[1]: kubelet.service: Consumed 6.867s CPU time, 112.2M memory peak. Apr 20 19:27:47.613911 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 20 19:27:47.732875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:27:51.524921 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:27:51.579666 (kubelet)[2193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:27:53.491437 update_engine[1606]: I20260420 19:27:53.489334 1606 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 20 19:27:53.616948 update_engine[1606]: I20260420 19:27:53.515434 1606 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 20 19:27:53.616948 update_engine[1606]: I20260420 19:27:53.520714 1606 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 20 19:27:53.616948 update_engine[1606]: I20260420 19:27:53.584141 1606 omaha_request_params.cc:62] Current group set to alpha Apr 20 19:27:53.640311 update_engine[1606]: I20260420 19:27:53.638124 1606 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 20 19:27:53.640311 update_engine[1606]: I20260420 19:27:53.638219 1606 update_attempter.cc:643] Scheduling an action processor start. 
Apr 20 19:27:53.640311 update_engine[1606]: I20260420 19:27:53.638242 1606 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 20 19:27:53.653366 update_engine[1606]: I20260420 19:27:53.645609 1606 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 20 19:27:53.669544 update_engine[1606]: I20260420 19:27:53.666950 1606 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 20 19:27:53.680344 update_engine[1606]: I20260420 19:27:53.675879 1606 omaha_request_action.cc:272] Request: Apr 20 19:27:53.680344 update_engine[1606]: Apr 20 19:27:53.680344 update_engine[1606]: Apr 20 19:27:53.680344 update_engine[1606]: Apr 20 19:27:53.680344 update_engine[1606]: Apr 20 19:27:53.680344 update_engine[1606]: Apr 20 19:27:53.680344 update_engine[1606]: Apr 20 19:27:53.680344 update_engine[1606]: Apr 20 19:27:53.680344 update_engine[1606]: Apr 20 19:27:53.717371 update_engine[1606]: I20260420 19:27:53.683451 1606 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 19:27:53.783796 update_engine[1606]: I20260420 19:27:53.761204 1606 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 19:27:53.811174 locksmithd[1698]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 20 19:27:54.100668 update_engine[1606]: I20260420 19:27:54.085474 1606 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 20 19:27:54.122651 update_engine[1606]: E20260420 19:27:54.120574 1606 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 19:27:54.166270 update_engine[1606]: I20260420 19:27:54.160367 1606 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 20 19:27:57.276305 kubelet[2193]: E0420 19:27:57.273290 2193 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:27:57.332596 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:27:57.340461 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:27:57.347414 systemd[1]: kubelet.service: Consumed 4.835s CPU time, 109.3M memory peak. Apr 20 19:28:01.879467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2260013535.mount: Deactivated successfully. Apr 20 19:28:04.391040 update_engine[1606]: I20260420 19:28:04.383304 1606 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 19:28:04.428585 update_engine[1606]: I20260420 19:28:04.398972 1606 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 19:28:04.432779 update_engine[1606]: I20260420 19:28:04.426640 1606 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 20 19:28:04.489420 update_engine[1606]: E20260420 19:28:04.485565 1606 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 19:28:04.527337 update_engine[1606]: I20260420 19:28:04.525246 1606 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 20 19:28:07.664422 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. 
Apr 20 19:28:07.783666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:28:12.278752 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:28:12.549488 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:28:14.413628 update_engine[1606]: I20260420 19:28:14.392385 1606 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 19:28:14.423216 update_engine[1606]: I20260420 19:28:14.422093 1606 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 19:28:14.451784 update_engine[1606]: I20260420 19:28:14.451642 1606 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 20 19:28:14.531481 update_engine[1606]: E20260420 19:28:14.525487 1606 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 19:28:14.543451 update_engine[1606]: I20260420 19:28:14.542835 1606 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 20 19:28:24.422414 update_engine[1606]: I20260420 19:28:24.390883 1606 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 19:28:24.422414 update_engine[1606]: I20260420 19:28:24.420696 1606 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 19:28:24.465436 update_engine[1606]: I20260420 19:28:24.455480 1606 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 20 19:28:24.505133 update_engine[1606]: E20260420 19:28:24.502578 1606 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 19:28:24.512034 kubelet[2223]: E0420 19:28:24.509395 2223 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:28:24.515419 update_engine[1606]: I20260420 19:28:24.509696 1606 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 20 19:28:24.515419 update_engine[1606]: I20260420 19:28:24.510761 1606 omaha_request_action.cc:617] Omaha request response: Apr 20 19:28:24.515419 update_engine[1606]: E20260420 19:28:24.515211 1606 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 20 19:28:24.523426 update_engine[1606]: I20260420 19:28:24.521342 1606 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 20 19:28:24.523426 update_engine[1606]: I20260420 19:28:24.522377 1606 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 20 19:28:24.523426 update_engine[1606]: I20260420 19:28:24.522419 1606 update_attempter.cc:306] Processing Done. Apr 20 19:28:24.524562 update_engine[1606]: E20260420 19:28:24.524099 1606 update_attempter.cc:619] Update failed. 
Apr 20 19:28:24.524562 update_engine[1606]: I20260420 19:28:24.524179 1606 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 20 19:28:24.524562 update_engine[1606]: I20260420 19:28:24.524185 1606 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 20 19:28:24.524562 update_engine[1606]: I20260420 19:28:24.524223 1606 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 20 19:28:24.524663 update_engine[1606]: I20260420 19:28:24.524582 1606 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 20 19:28:24.526490 update_engine[1606]: I20260420 19:28:24.524827 1606 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 20 19:28:24.530554 update_engine[1606]: I20260420 19:28:24.525748 1606 omaha_request_action.cc:272] Request: Apr 20 19:28:24.530554 update_engine[1606]: Apr 20 19:28:24.530554 update_engine[1606]: Apr 20 19:28:24.530554 update_engine[1606]: Apr 20 19:28:24.530554 update_engine[1606]: Apr 20 19:28:24.530554 update_engine[1606]: Apr 20 19:28:24.530554 update_engine[1606]: Apr 20 19:28:24.530554 update_engine[1606]: I20260420 19:28:24.530243 1606 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 19:28:24.531269 update_engine[1606]: I20260420 19:28:24.530743 1606 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 19:28:24.532158 locksmithd[1698]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 20 19:28:24.540035 update_engine[1606]: I20260420 19:28:24.534590 1606 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 20 19:28:24.553744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:28:24.563628 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 20 19:28:24.585276 update_engine[1606]: E20260420 19:28:24.584728 1606 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 19:28:24.607374 update_engine[1606]: I20260420 19:28:24.604733 1606 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 20 19:28:24.588862 systemd[1]: kubelet.service: Consumed 8.718s CPU time, 109.1M memory peak. Apr 20 19:28:24.610869 update_engine[1606]: I20260420 19:28:24.605949 1606 omaha_request_action.cc:617] Omaha request response: Apr 20 19:28:24.622963 update_engine[1606]: I20260420 19:28:24.617367 1606 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 20 19:28:24.622963 update_engine[1606]: I20260420 19:28:24.624718 1606 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 20 19:28:24.622963 update_engine[1606]: I20260420 19:28:24.625972 1606 update_attempter.cc:306] Processing Done. Apr 20 19:28:24.642829 update_engine[1606]: I20260420 19:28:24.630709 1606 update_attempter.cc:310] Error event sent. Apr 20 19:28:24.642829 update_engine[1606]: I20260420 19:28:24.631459 1606 update_check_scheduler.cc:74] Next update check in 41m29s Apr 20 19:28:24.743759 locksmithd[1698]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 20 19:28:34.828187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 20 19:28:34.921723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:28:40.190003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 19:28:40.251376 (kubelet)[2239]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:28:49.835304 kubelet[2239]: E0420 19:28:49.831969 2239 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:28:49.880120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:28:49.880934 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:28:49.935812 systemd[1]: kubelet.service: Consumed 7.261s CPU time, 110M memory peak. Apr 20 19:29:00.155980 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 20 19:29:00.382579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:29:06.709543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:29:07.224324 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:29:35.171339 kubelet[2256]: E0420 19:29:35.135908 2256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:29:35.233384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:29:35.245271 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:29:35.327382 systemd[1]: kubelet.service: Consumed 18.858s CPU time, 111.1M memory peak. 
Apr 20 19:29:45.428670 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 20 19:29:45.623359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:29:52.770829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:29:52.907653 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:30:15.244221 kubelet[2290]: E0420 19:30:15.240518 2290 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:30:15.331445 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:30:15.339443 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:30:15.359316 systemd[1]: kubelet.service: Consumed 15.269s CPU time, 112.3M memory peak. Apr 20 19:30:25.592196 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 20 19:30:25.689949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:30:30.692737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:30:30.812939 (kubelet)[2342]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:30:44.568819 systemd[1737]: Created slice background.slice - User Background Tasks Slice. Apr 20 19:30:44.639696 systemd[1737]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Apr 20 19:30:45.029961 systemd[1737]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. 
Apr 20 19:30:48.316832 containerd[1634]: time="2026-04-20T19:30:48.313955411Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.1: active requests=0, bytes read=27684211"
Apr 20 19:30:48.342319 containerd[1634]: time="2026-04-20T19:30:48.317017060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:30:49.261489 containerd[1634]: time="2026-04-20T19:30:49.169922905Z" level=info msg="ImageCreate event name:\"sha256:6f9eeb0cff9812c46738ee2fb811ca962aba8994283b4468ac9226f7ee65c54a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:30:51.964634 containerd[1634]: time="2026-04-20T19:30:51.961845463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:011838b85f65454b95a013b2b902dd506789fd07f9abc84e52eb2b6a044cd392\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:30:52.976446 containerd[1634]: time="2026-04-20T19:30:52.973477176Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.1\" with image id \"sha256:6f9eeb0cff9812c46738ee2fb811ca962aba8994283b4468ac9226f7ee65c54a\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:011838b85f65454b95a013b2b902dd506789fd07f9abc84e52eb2b6a044cd392\", size \"27691714\" in 3m16.075404298s"
Apr 20 19:30:53.044031 containerd[1634]: time="2026-04-20T19:30:53.033259909Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.1\" returns image reference \"sha256:6f9eeb0cff9812c46738ee2fb811ca962aba8994283b4468ac9226f7ee65c54a\""
Apr 20 19:30:53.360815 containerd[1634]: time="2026-04-20T19:30:53.357663023Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.1\""
Apr 20 19:30:56.559544 kubelet[2342]: E0420 19:30:56.548992 2342 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:30:56.649034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:30:56.656320 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:30:56.687396 systemd[1]: kubelet.service: Consumed 16.828s CPU time, 110.6M memory peak.
Apr 20 19:31:06.937274 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Apr 20 19:31:07.051236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:31:14.860493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:31:15.034668 (kubelet)[2364]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:31:23.720412 kubelet[2364]: E0420 19:31:23.718805 2364 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:31:23.777313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:31:23.782555 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:31:23.802637 systemd[1]: kubelet.service: Consumed 8.230s CPU time, 111M memory peak.
Apr 20 19:31:34.046183 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14.
Apr 20 19:31:34.127549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:31:37.922466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:31:38.110530 (kubelet)[2381]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:31:43.730272 kubelet[2381]: E0420 19:31:43.729587 2381 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:31:43.848601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:31:43.852674 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:31:43.864827 systemd[1]: kubelet.service: Consumed 5.084s CPU time, 110.6M memory peak.
Apr 20 19:31:48.510608 containerd[1634]: time="2026-04-20T19:31:48.492571204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:31:48.583108 containerd[1634]: time="2026-04-20T19:31:48.573897775Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.1: active requests=0, bytes read=21445008"
Apr 20 19:31:49.338493 containerd[1634]: time="2026-04-20T19:31:49.308784570Z" level=info msg="ImageCreate event name:\"sha256:8d7002962c4843ec8e0a4daa875d19df608b0a2eb84fc6ee7106c1ab81e07e9e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:31:54.082619 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15.
Apr 20 19:31:54.336930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:31:54.577624 containerd[1634]: time="2026-04-20T19:31:54.565409752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:9fb295baa9d68543d7bbecc23e16fcdf85c8c06680f91e628535aa6fbe180dbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:31:55.648162 containerd[1634]: time="2026-04-20T19:31:55.647284181Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.1\" with image id \"sha256:8d7002962c4843ec8e0a4daa875d19df608b0a2eb84fc6ee7106c1ab81e07e9e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:9fb295baa9d68543d7bbecc23e16fcdf85c8c06680f91e628535aa6fbe180dbd\", size \"23140660\" in 1m2.269964829s"
Apr 20 19:31:55.667282 containerd[1634]: time="2026-04-20T19:31:55.648452259Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.1\" returns image reference \"sha256:8d7002962c4843ec8e0a4daa875d19df608b0a2eb84fc6ee7106c1ab81e07e9e\""
Apr 20 19:31:55.834891 containerd[1634]: time="2026-04-20T19:31:55.834245881Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.1\""
Apr 20 19:31:59.252893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:31:59.429592 (kubelet)[2398]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:32:13.098697 kubelet[2398]: E0420 19:32:13.096204 2398 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:32:13.193016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:32:13.234395 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:32:13.243872 systemd[1]: kubelet.service: Consumed 9.016s CPU time, 112.3M memory peak.
Apr 20 19:32:23.407967 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16.
Apr 20 19:32:23.526158 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:32:27.317690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:32:27.457993 (kubelet)[2419]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:32:32.391808 containerd[1634]: time="2026-04-20T19:32:32.380276522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:32:32.556699 containerd[1634]: time="2026-04-20T19:32:32.496040965Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.1: active requests=0, bytes read=15545845"
Apr 20 19:32:32.990196 kubelet[2419]: E0420 19:32:32.986804 2419 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:32:33.137878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:32:33.141820 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:32:33.204759 systemd[1]: kubelet.service: Consumed 4.008s CPU time, 109.3M memory peak.
Apr 20 19:32:33.715708 containerd[1634]: time="2026-04-20T19:32:33.696591047Z" level=info msg="ImageCreate event name:\"sha256:5f2a969bc7a43f057f9079ae7ec159afc993a01ffcdd779b287c7a3eeb3951c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:32:37.938368 containerd[1634]: time="2026-04-20T19:32:37.932785555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fc251ed4b8a03830bb8f75fb5fe983b3b0b5cc15a9c066d8f6c5d2e547deece8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:32:38.964143 containerd[1634]: time="2026-04-20T19:32:38.956759871Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.1\" with image id \"sha256:5f2a969bc7a43f057f9079ae7ec159afc993a01ffcdd779b287c7a3eeb3951c1\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fc251ed4b8a03830bb8f75fb5fe983b3b0b5cc15a9c066d8f6c5d2e547deece8\", size \"17239626\" in 43.112461741s"
Apr 20 19:32:39.004006 containerd[1634]: time="2026-04-20T19:32:38.983532111Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.1\" returns image reference \"sha256:5f2a969bc7a43f057f9079ae7ec159afc993a01ffcdd779b287c7a3eeb3951c1\""
Apr 20 19:32:39.229540 containerd[1634]: time="2026-04-20T19:32:39.217656917Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.1\""
Apr 20 19:32:43.529365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17.
Apr 20 19:32:43.721342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:32:50.853648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:32:51.021651 (kubelet)[2438]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:33:04.432544 kubelet[2438]: E0420 19:33:04.399785 2438 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:33:04.556629 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:33:04.563579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:33:04.666107 systemd[1]: kubelet.service: Consumed 9.505s CPU time, 112M memory peak.
Apr 20 19:33:14.891694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18.
Apr 20 19:33:15.157302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:33:22.322984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:33:22.892403 (kubelet)[2460]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:33:28.997938 kubelet[2460]: E0420 19:33:28.992040 2460 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:33:29.024280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:33:29.029531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:33:29.034740 systemd[1]: kubelet.service: Consumed 5.846s CPU time, 109.1M memory peak.
Apr 20 19:33:39.330360 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 19.
Apr 20 19:33:39.377838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:33:44.150831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:33:44.229098 (kubelet)[2477]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:33:51.771999 kubelet[2477]: E0420 19:33:51.769302 2477 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:33:51.852991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:33:51.857784 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:33:51.942471 systemd[1]: kubelet.service: Consumed 6.309s CPU time, 111M memory peak.
Apr 20 19:34:02.090942 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
Apr 20 19:34:02.194640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:34:07.786713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:34:07.954745 (kubelet)[2494]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:34:19.571636 kubelet[2494]: E0420 19:34:19.570904 2494 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:34:19.615910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:34:19.617674 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:34:19.632922 systemd[1]: kubelet.service: Consumed 8.681s CPU time, 109.4M memory peak.
Apr 20 19:34:29.818943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 21.
Apr 20 19:34:29.875528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:34:35.462022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:34:35.575967 (kubelet)[2512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:34:47.356445 kubelet[2512]: E0420 19:34:47.353959 2512 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:34:47.416549 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:34:47.417721 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:34:47.436024 systemd[1]: kubelet.service: Consumed 8.881s CPU time, 110.7M memory peak.
Apr 20 19:34:57.673360 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 22.
Apr 20 19:34:57.775576 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:35:02.732909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:35:02.783349 (kubelet)[2534]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:35:10.810725 kubelet[2534]: E0420 19:35:10.810403 2534 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:35:10.868916 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:35:10.878021 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:35:10.921815 systemd[1]: kubelet.service: Consumed 7.232s CPU time, 112.5M memory peak.
Apr 20 19:35:21.057356 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 23.
Apr 20 19:35:21.180178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:35:26.472914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:35:26.549349 (kubelet)[2550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:35:35.329730 kubelet[2550]: E0420 19:35:35.323951 2550 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:35:35.377426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:35:35.377853 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:35:35.388657 systemd[1]: kubelet.service: Consumed 8.295s CPU time, 109.4M memory peak.
Apr 20 19:35:40.522767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3191992854.mount: Deactivated successfully.
Apr 20 19:35:45.578960 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 24.
Apr 20 19:35:45.764911 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:35:50.528158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:35:50.639647 (kubelet)[2571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:35:57.860436 kubelet[2571]: E0420 19:35:57.857545 2571 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:35:58.001803 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:35:58.002018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:35:58.012964 systemd[1]: kubelet.service: Consumed 7.651s CPU time, 108.8M memory peak.
Apr 20 19:36:04.474737 containerd[1634]: time="2026-04-20T19:36:04.465032713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:36:04.521702 containerd[1634]: time="2026-04-20T19:36:04.519485806Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.1: active requests=0, bytes read=25680427"
Apr 20 19:36:04.971680 containerd[1634]: time="2026-04-20T19:36:04.967452315Z" level=info msg="ImageCreate event name:\"sha256:6521110cdb017762a8780b67128baef82ee1e0fd8a91f9a0e42b265eab7807b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:36:07.320880 containerd[1634]: time="2026-04-20T19:36:07.314946872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a832f1cece7252b2e52294be5a59b7579ccde35202ad63e09e9f4f04c5676435\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:36:08.050298 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 25.
Apr 20 19:36:08.075917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:36:08.413909 containerd[1634]: time="2026-04-20T19:36:08.392818139Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.1\" with image id \"sha256:6521110cdb017762a8780b67128baef82ee1e0fd8a91f9a0e42b265eab7807b9\", repo tag \"registry.k8s.io/kube-proxy:v1.35.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:a832f1cece7252b2e52294be5a59b7579ccde35202ad63e09e9f4f04c5676435\", size \"25682526\" in 3m29.148285472s"
Apr 20 19:36:08.437274 containerd[1634]: time="2026-04-20T19:36:08.418774000Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.1\" returns image reference \"sha256:6521110cdb017762a8780b67128baef82ee1e0fd8a91f9a0e42b265eab7807b9\""
Apr 20 19:36:08.631324 containerd[1634]: time="2026-04-20T19:36:08.630004442Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 20 19:36:13.969268 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:36:14.055432 (kubelet)[2587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:36:19.875098 kubelet[2587]: E0420 19:36:19.871671 2587 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:36:19.929369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:36:19.929542 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:36:19.932727 systemd[1]: kubelet.service: Consumed 6.768s CPU time, 110M memory peak.
Apr 20 19:36:29.195478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount185160046.mount: Deactivated successfully.
Apr 20 19:36:30.058989 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 26.
Apr 20 19:36:30.156907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:36:35.137631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:36:35.292938 (kubelet)[2617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:36:53.859930 kubelet[2617]: E0420 19:36:53.854023 2617 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:36:53.965253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:36:53.967206 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:36:53.993796 systemd[1]: kubelet.service: Consumed 15.119s CPU time, 109.9M memory peak.
Apr 20 19:37:04.044144 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 27.
Apr 20 19:37:04.083588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:37:09.024575 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:37:09.120113 (kubelet)[2634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:37:16.171386 kubelet[2634]: E0420 19:37:16.170679 2634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:37:16.193965 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:37:16.195983 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:37:16.203694 systemd[1]: kubelet.service: Consumed 7.252s CPU time, 110.9M memory peak.
Apr 20 19:37:26.312921 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 28.
Apr 20 19:37:26.328555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:37:31.296922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:37:31.318621 (kubelet)[2650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:37:38.473721 kubelet[2650]: E0420 19:37:38.469966 2650 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:37:38.542419 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:37:38.549756 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:37:38.566347 systemd[1]: kubelet.service: Consumed 6.794s CPU time, 110.8M memory peak.
Apr 20 19:37:48.574862 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 29.
Apr 20 19:37:48.625311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:37:54.853333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:37:54.990993 (kubelet)[2668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:38:03.952596 kubelet[2668]: E0420 19:38:03.952405 2668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:38:03.997884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:38:04.052337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:38:04.089963 systemd[1]: kubelet.service: Consumed 8.758s CPU time, 110.6M memory peak.
Apr 20 19:38:14.151252 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 30.
Apr 20 19:38:14.392301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:38:23.866745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:38:23.963204 (kubelet)[2730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:38:30.895527 kubelet[2730]: E0420 19:38:30.888378 2730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:38:30.972400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:38:30.977763 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:38:31.017894 systemd[1]: kubelet.service: Consumed 8.703s CPU time, 111.4M memory peak.
Apr 20 19:38:40.752606 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
Apr 20 19:38:41.039379 systemd-tmpfiles[2741]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 20 19:38:41.039639 systemd-tmpfiles[2741]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 20 19:38:41.043700 systemd-tmpfiles[2741]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 20 19:38:41.047144 systemd-tmpfiles[2741]: ACLs are not supported, ignoring.
Apr 20 19:38:41.048363 systemd-tmpfiles[2741]: ACLs are not supported, ignoring.
Apr 20 19:38:41.062685 systemd-tmpfiles[2741]: Detected autofs mount point /boot during canonicalization of boot.
Apr 20 19:38:41.062698 systemd-tmpfiles[2741]: Skipping /boot
Apr 20 19:38:41.153999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 31.
Apr 20 19:38:41.190678 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Apr 20 19:38:41.220799 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
Apr 20 19:38:41.585539 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:38:48.321204 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:38:48.376393 (kubelet)[2750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:38:51.688923 containerd[1634]: time="2026-04-20T19:38:51.676921557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:38:51.735993 containerd[1634]: time="2026-04-20T19:38:51.732571811Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23545226"
Apr 20 19:38:52.186208 containerd[1634]: time="2026-04-20T19:38:52.183314214Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:38:53.687544 kubelet[2750]: E0420 19:38:53.685463 2750 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:38:53.769804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:38:53.774550 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:38:53.786610 systemd[1]: kubelet.service: Consumed 6.968s CPU time, 110.8M memory peak.
Apr 20 19:38:54.301367 containerd[1634]: time="2026-04-20T19:38:54.298740508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:38:54.945347 containerd[1634]: time="2026-04-20T19:38:54.933517365Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 2m46.295084313s"
Apr 20 19:38:54.968303 containerd[1634]: time="2026-04-20T19:38:54.964991217Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Apr 20 19:38:55.114190 containerd[1634]: time="2026-04-20T19:38:55.111653627Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 20 19:39:04.049487 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 32.
Apr 20 19:39:04.215738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:39:07.350811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2435436408.mount: Deactivated successfully.
Apr 20 19:39:07.952602 containerd[1634]: time="2026-04-20T19:39:07.946915456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 19:39:07.978367 containerd[1634]: time="2026-04-20T19:39:07.977712121Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=1, bytes read=297819" Apr 20 19:39:08.658067 containerd[1634]: time="2026-04-20T19:39:08.654996722Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 19:39:10.592668 containerd[1634]: time="2026-04-20T19:39:10.588776923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 19:39:11.139578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 19:39:11.282722 (kubelet)[2772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:39:11.466320 containerd[1634]: time="2026-04-20T19:39:11.462599138Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 16.344882356s" Apr 20 19:39:11.480699 containerd[1634]: time="2026-04-20T19:39:11.466874342Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 20 19:39:11.595981 containerd[1634]: time="2026-04-20T19:39:11.595690250Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 20 19:39:29.095964 kubelet[2772]: E0420 19:39:29.082890 2772 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:39:29.174788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:39:29.179240 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:39:29.193109 systemd[1]: kubelet.service: Consumed 14.050s CPU time, 110.7M memory peak. Apr 20 19:39:34.750354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount334543406.mount: Deactivated successfully. Apr 20 19:39:39.309270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 33. Apr 20 19:39:39.447402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 20 19:39:46.773852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:39:46.837001 (kubelet)[2811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:39:54.333008 kubelet[2811]: E0420 19:39:54.324445 2811 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:39:54.370875 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:39:54.371337 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:39:54.438472 systemd[1]: kubelet.service: Consumed 8.297s CPU time, 111.8M memory peak. Apr 20 19:40:04.563630 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 34. Apr 20 19:40:04.692028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:40:11.558344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:40:11.700550 (kubelet)[2827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:40:21.730203 kubelet[2827]: E0420 19:40:21.725447 2827 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:40:21.755468 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:40:21.758029 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 20 19:40:21.795211 systemd[1]: kubelet.service: Consumed 9.958s CPU time, 111M memory peak. Apr 20 19:40:31.887680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 35. Apr 20 19:40:32.045736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:40:38.917676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:40:39.094906 (kubelet)[2843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:40:47.255429 kubelet[2843]: E0420 19:40:47.244494 2843 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:40:47.269927 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:40:47.270183 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:40:47.272957 systemd[1]: kubelet.service: Consumed 8.833s CPU time, 111.2M memory peak. Apr 20 19:40:57.543893 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 36. Apr 20 19:40:57.672287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:41:04.426986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 19:41:04.489175 (kubelet)[2868]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:41:11.176285 kubelet[2868]: E0420 19:41:11.170585 2868 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:41:11.203142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:41:11.204167 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:41:11.297310 systemd[1]: kubelet.service: Consumed 6.636s CPU time, 110.7M memory peak. Apr 20 19:41:21.287516 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 37. Apr 20 19:41:21.411804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:41:28.041967 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:41:28.082371 (kubelet)[2918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:41:31.478538 kubelet[2918]: E0420 19:41:31.477519 2918 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:41:31.486601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:41:31.486797 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:41:31.489366 systemd[1]: kubelet.service: Consumed 5.947s CPU time, 110.8M memory peak. 
Apr 20 19:41:41.535006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 38. Apr 20 19:41:41.646465 containerd[1634]: time="2026-04-20T19:41:41.639786132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:41:41.649741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:41:42.300192 containerd[1634]: time="2026-04-20T19:41:42.287737970Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23621099" Apr 20 19:41:42.664376 containerd[1634]: time="2026-04-20T19:41:42.661993715Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:41:45.466711 containerd[1634]: time="2026-04-20T19:41:45.464916527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:41:46.094800 containerd[1634]: time="2026-04-20T19:41:46.091593551Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 2m34.494377793s" Apr 20 19:41:46.101663 containerd[1634]: time="2026-04-20T19:41:46.100220407Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Apr 20 19:41:49.547986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 19:41:49.863021 (kubelet)[2944]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:42:00.975637 kubelet[2944]: E0420 19:42:00.965737 2944 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:42:01.055369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:42:01.055703 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:42:01.061532 systemd[1]: kubelet.service: Consumed 9.321s CPU time, 110.6M memory peak. Apr 20 19:42:11.551740 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 39. Apr 20 19:42:11.750383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:42:18.654388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:42:18.726644 (kubelet)[2964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:42:29.100590 kubelet[2964]: E0420 19:42:29.094632 2964 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:42:29.222940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:42:29.231960 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:42:29.262971 systemd[1]: kubelet.service: Consumed 10.016s CPU time, 110.6M memory peak. 
Apr 20 19:42:39.377604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 40. Apr 20 19:42:39.584928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:42:49.392911 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:42:49.627413 (kubelet)[2997]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:43:01.863330 kubelet[2997]: E0420 19:43:01.849844 2997 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:43:01.973303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:43:01.981008 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:43:01.999372 systemd[1]: kubelet.service: Consumed 11.141s CPU time, 109.4M memory peak. Apr 20 19:43:12.075776 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 41. Apr 20 19:43:12.192287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:43:25.229998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:43:25.684813 (kubelet)[3016]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:43:37.981706 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:43:38.044781 systemd[1]: kubelet.service: Deactivated successfully. Apr 20 19:43:38.055994 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:43:38.202673 systemd[1]: kubelet.service: Consumed 11.530s CPU time, 110.8M memory peak. 
Apr 20 19:43:38.746561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:43:42.116655 systemd[1]: Reload requested from client PID 3034 ('systemctl') (unit session-6.scope)... Apr 20 19:43:42.118562 systemd[1]: Reloading... Apr 20 19:43:56.471705 systemd-ssh-generator[3080]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 20 19:43:56.518364 zram_generator::config[3088]: No configuration found. Apr 20 19:43:56.531394 (sd-exec-[3065]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 20 19:44:25.975967 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 20 19:44:40.601450 systemd[1]: Reloading finished in 58466 ms. Apr 20 19:44:45.017487 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:44:45.337007 (kubelet)[3137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 20 19:44:46.389035 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:44:46.531662 systemd[1]: kubelet.service: Deactivated successfully. Apr 20 19:44:46.552874 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:44:46.558862 systemd[1]: kubelet.service: Consumed 5.752s CPU time, 100.5M memory peak. Apr 20 19:44:47.153907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:45:00.239708 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:45:00.535037 (kubelet)[3155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 20 19:45:27.149970 kubelet[3155]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 20 19:45:31.934500 kubelet[3155]: I0420 19:45:31.929457 3155 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 20 19:45:31.948585 kubelet[3155]: I0420 19:45:31.936623 3155 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 20 19:45:31.948585 kubelet[3155]: I0420 19:45:31.942579 3155 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 20 19:45:31.948585 kubelet[3155]: I0420 19:45:31.942993 3155 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 20 19:45:32.081318 kubelet[3155]: I0420 19:45:32.077801 3155 server.go:951] "Client rotation is on, will bootstrap in background" Apr 20 19:45:33.851894 kubelet[3155]: E0420 19:45:33.846031 3155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 19:45:34.680891 kubelet[3155]: I0420 19:45:34.680559 3155 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 20 19:45:36.556861 kubelet[3155]: E0420 19:45:36.554227 3155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 19:45:38.061672 kubelet[3155]: I0420 19:45:37.992011 3155 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 20 
19:45:40.960212 kubelet[3155]: E0420 19:45:40.956894 3155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 19:45:41.865577 kubelet[3155]: I0420 19:45:41.863936 3155 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 20 19:45:42.241759 kubelet[3155]: I0420 19:45:42.232701 3155 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 20 19:45:42.399025 kubelet[3155]: I0420 19:45:42.241624 3155 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePe
riod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 20 19:45:42.418285 kubelet[3155]: I0420 19:45:42.410459 3155 topology_manager.go:143] "Creating topology manager with none policy" Apr 20 19:45:42.418285 kubelet[3155]: I0420 19:45:42.414322 3155 container_manager_linux.go:308] "Creating device plugin manager" Apr 20 19:45:42.428965 kubelet[3155]: I0420 19:45:42.428541 3155 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 20 19:45:42.781274 kubelet[3155]: I0420 19:45:42.778445 3155 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 20 19:45:42.887458 kubelet[3155]: I0420 19:45:42.886242 3155 kubelet.go:482] "Attempting to sync node with API server" Apr 20 19:45:42.900180 kubelet[3155]: I0420 19:45:42.898763 3155 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 20 19:45:42.914535 kubelet[3155]: I0420 19:45:42.913521 3155 kubelet.go:394] "Adding apiserver pod source" Apr 20 19:45:42.938512 kubelet[3155]: I0420 19:45:42.933458 3155 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 20 19:45:43.750135 kubelet[3155]: I0420 19:45:43.747676 3155 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.2.1" apiVersion="v1" Apr 20 19:45:44.146351 kubelet[3155]: I0420 19:45:44.145195 3155 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 20 19:45:44.156914 kubelet[3155]: I0420 19:45:44.149366 3155 kubelet.go:970] "Not starting PodCertificateRequest manager because we 
are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 20 19:45:44.221395 kubelet[3155]: W0420 19:45:44.168571 3155 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 20 19:45:44.940028 kubelet[3155]: I0420 19:45:44.939615 3155 server.go:1257] "Started kubelet" Apr 20 19:45:44.962948 kubelet[3155]: I0420 19:45:44.960723 3155 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 20 19:45:45.067418 kubelet[3155]: I0420 19:45:44.957960 3155 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 20 19:45:45.185528 kubelet[3155]: I0420 19:45:45.182663 3155 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 20 19:45:45.342963 kubelet[3155]: E0420 19:45:45.221963 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8284884dcc28b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,LastTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:45:45.375865 kubelet[3155]: I0420 19:45:45.375640 3155 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 20 19:45:45.517130 kubelet[3155]: I0420 19:45:45.516855 3155 server.go:317] "Adding debug handlers to kubelet server" Apr 20 19:45:45.567358 
kubelet[3155]: I0420 19:45:45.555476 3155 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 20 19:45:45.579300 kubelet[3155]: I0420 19:45:45.578118 3155 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 20 19:45:45.734645 kubelet[3155]: I0420 19:45:45.732907 3155 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 20 19:45:45.898196 kubelet[3155]: I0420 19:45:45.862858 3155 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 20 19:45:46.182410 kubelet[3155]: E0420 19:45:46.029736 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 19:45:46.673298 kubelet[3155]: E0420 19:45:46.672660 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 19:45:46.709399 kubelet[3155]: I0420 19:45:46.545638 3155 reconciler.go:29] "Reconciler: start to sync state" Apr 20 19:45:46.740235 kubelet[3155]: E0420 19:45:46.727025 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="200ms" Apr 20 19:45:47.166476 kubelet[3155]: E0420 19:45:47.030375 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 19:45:47.714227 kubelet[3155]: E0420 19:45:47.686215 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 19:45:47.747158 kubelet[3155]: E0420 19:45:47.745973 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: 
connect: connection refused" interval="400ms" Apr 20 19:45:47.748928 kubelet[3155]: I0420 19:45:47.746500 3155 factory.go:223] Registration of the systemd container factory successfully Apr 20 19:45:47.752990 kubelet[3155]: I0420 19:45:47.752447 3155 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 20 19:45:47.936116 kubelet[3155]: I0420 19:45:47.904983 3155 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 20 19:45:47.955563 kubelet[3155]: E0420 19:45:47.941558 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 19:45:48.153898 kubelet[3155]: E0420 19:45:48.153368 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 19:45:48.274672 kubelet[3155]: E0420 19:45:48.271445 3155 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 20 19:45:48.324312 kubelet[3155]: E0420 19:45:48.323795 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 19:45:48.414499 kubelet[3155]: W0420 19:45:48.411178 3155 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "localhost", Attributes: {"<%!p(networktype.keyType=grpc.internal.transport.networktype)>": "unix" }, BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. 
Err: connection error: desc = "transport: failed to write client preface: write unix @->/run/containerd/containerd.sock: use of closed network connection" Apr 20 19:45:48.474634 kubelet[3155]: E0420 19:45:48.431714 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="800ms" Apr 20 19:45:48.523611 kubelet[3155]: E0420 19:45:48.520498 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 19:45:48.719251 kubelet[3155]: E0420 19:45:48.712424 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 19:45:48.857436 kubelet[3155]: E0420 19:45:48.842582 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 19:45:48.967163 kubelet[3155]: E0420 19:45:48.965681 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 19:45:49.173426 kubelet[3155]: E0420 19:45:49.149803 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 19:45:49.490364 kubelet[3155]: W0420 19:45:49.480210 3155 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "localhost", Attributes: {"<%!p(networktype.keyType=grpc.internal.transport.networktype)>": "unix" }, BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. 
Err: connection error: desc = "transport: Error while dialing: dial unix:///run/containerd/containerd.sock: timeout"
Apr 20 19:45:49.625159 kubelet[3155]: E0420 19:45:49.436033 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:49.682770 kubelet[3155]: I0420 19:45:49.611337 3155 factory.go:221] Registration of the containerd container factory failed: failed to fetch containerd client version: connection error: desc = "transport: Error while dialing: dial unix:///run/containerd/containerd.sock: timeout"
Apr 20 19:45:49.826322 kubelet[3155]: E0420 19:45:49.825928 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:49.882242 kubelet[3155]: E0420 19:45:49.880682 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="1.6s"
Apr 20 19:45:49.964393 kubelet[3155]: E0420 19:45:49.963876 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:50.109533 kubelet[3155]: E0420 19:45:50.104028 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:50.144342 kubelet[3155]: E0420 19:45:50.142873 3155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 20 19:45:50.313492 kubelet[3155]: E0420 19:45:50.295038 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:50.440475 kubelet[3155]: E0420 19:45:50.438857 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:50.600474 kubelet[3155]: E0420 19:45:50.599783 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:50.799198 kubelet[3155]: E0420 19:45:50.797576 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:50.814331 kubelet[3155]: I0420 19:45:50.812627 3155 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 20 19:45:50.818205 kubelet[3155]: I0420 19:45:50.815361 3155 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 20 19:45:50.828851 kubelet[3155]: I0420 19:45:50.828376 3155 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 20 19:45:50.855025 kubelet[3155]: E0420 19:45:50.847757 3155 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 20 19:45:51.096926 kubelet[3155]: E0420 19:45:51.089152 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:51.147399 kubelet[3155]: E0420 19:45:51.129980 3155 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 20 19:45:51.322757 kubelet[3155]: E0420 19:45:51.322400 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:51.440138 kubelet[3155]: E0420 19:45:51.436963 3155 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 20 19:45:51.560820 kubelet[3155]: E0420 19:45:51.449749 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:51.661257 kubelet[3155]: E0420 19:45:51.658805 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="3.2s"
Apr 20 19:45:51.677316 kubelet[3155]: E0420 19:45:51.656547 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8284884dcc28b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,LastTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:45:51.685843 kubelet[3155]: E0420 19:45:51.680534 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:51.806547 kubelet[3155]: E0420 19:45:51.805581 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:51.872430 kubelet[3155]: E0420 19:45:51.868761 3155 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 20 19:45:51.951899 kubelet[3155]: E0420 19:45:51.949302 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:52.065817 kubelet[3155]: E0420 19:45:52.062643 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:52.197537 kubelet[3155]: E0420 19:45:52.193933 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:52.357132 kubelet[3155]: E0420 19:45:52.352646 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:52.467248 kubelet[3155]: E0420 19:45:52.466004 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:52.602742 kubelet[3155]: E0420 19:45:52.601038 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:52.719744 kubelet[3155]: E0420 19:45:52.718472 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:52.725413 kubelet[3155]: E0420 19:45:52.718453 3155 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 20 19:45:52.823176 kubelet[3155]: E0420 19:45:52.822402 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:52.968923 kubelet[3155]: E0420 19:45:52.966508 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:53.083319 kubelet[3155]: E0420 19:45:53.080677 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:53.202339 kubelet[3155]: E0420 19:45:53.201301 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:53.357727 kubelet[3155]: E0420 19:45:53.353718 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:53.529867 kubelet[3155]: E0420 19:45:53.526970 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:53.668308 kubelet[3155]: E0420 19:45:53.666947 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:53.778206 kubelet[3155]: E0420 19:45:53.775336 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:53.931316 kubelet[3155]: E0420 19:45:53.929022 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:54.099189 kubelet[3155]: E0420 19:45:54.089605 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:54.215264 kubelet[3155]: E0420 19:45:54.211955 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:54.332430 kubelet[3155]: E0420 19:45:54.330637 3155 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 20 19:45:54.339906 kubelet[3155]: E0420 19:45:54.336008 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:54.475358 kubelet[3155]: E0420 19:45:54.464992 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:54.590827 kubelet[3155]: E0420 19:45:54.589704 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:54.747427 kubelet[3155]: E0420 19:45:54.738372 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:54.924869 kubelet[3155]: E0420 19:45:54.892463 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:55.076624 kubelet[3155]: E0420 19:45:55.072183 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:55.132142 kubelet[3155]: E0420 19:45:55.126784 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="6.4s"
Apr 20 19:45:55.269948 kubelet[3155]: E0420 19:45:55.261829 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:55.393615 kubelet[3155]: E0420 19:45:55.382355 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:55.509452 kubelet[3155]: E0420 19:45:55.504846 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:55.637226 kubelet[3155]: E0420 19:45:55.635396 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:55.778268 kubelet[3155]: E0420 19:45:55.760140 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:55.923214 kubelet[3155]: E0420 19:45:55.896892 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:56.110293 kubelet[3155]: E0420 19:45:56.063906 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:56.249250 kubelet[3155]: E0420 19:45:56.247214 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:56.397551 kubelet[3155]: E0420 19:45:56.367308 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:56.566176 kubelet[3155]: E0420 19:45:56.561318 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:56.654178 kubelet[3155]: I0420 19:45:56.648190 3155 cpu_manager.go:225] "Starting" policy="none"
Apr 20 19:45:56.675632 kubelet[3155]: I0420 19:45:56.673118 3155 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 20 19:45:56.684131 kubelet[3155]: I0420 19:45:56.680965 3155 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 20 19:45:56.718079 kubelet[3155]: E0420 19:45:56.684171 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:56.751428 kubelet[3155]: I0420 19:45:56.743958 3155 policy_none.go:50] "Start"
Apr 20 19:45:56.762711 kubelet[3155]: I0420 19:45:56.759649 3155 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 20 19:45:56.780280 kubelet[3155]: I0420 19:45:56.778586 3155 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 20 19:45:56.798137 kubelet[3155]: E0420 19:45:56.795884 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:56.911988 kubelet[3155]: I0420 19:45:56.898599 3155 policy_none.go:44] "Start"
Apr 20 19:45:56.911988 kubelet[3155]: E0420 19:45:56.908783 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:57.035864 kubelet[3155]: E0420 19:45:57.035521 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:57.141685 kubelet[3155]: E0420 19:45:57.140993 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:57.278187 kubelet[3155]: E0420 19:45:57.276842 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:57.386368 kubelet[3155]: E0420 19:45:57.385535 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:57.489552 kubelet[3155]: E0420 19:45:57.488843 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:57.570421 kubelet[3155]: E0420 19:45:57.558614 3155 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 20 19:45:57.620939 kubelet[3155]: E0420 19:45:57.620222 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:57.729972 kubelet[3155]: E0420 19:45:57.729648 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:57.781855 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 20 19:45:57.851001 kubelet[3155]: E0420 19:45:57.849125 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:57.965932 kubelet[3155]: E0420 19:45:57.963241 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:58.124243 kubelet[3155]: E0420 19:45:58.114012 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:58.265005 kubelet[3155]: E0420 19:45:58.263155 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:58.400360 kubelet[3155]: E0420 19:45:58.397731 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:58.583430 kubelet[3155]: E0420 19:45:58.576902 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:58.768790 kubelet[3155]: E0420 19:45:58.695755 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:58.967599 kubelet[3155]: E0420 19:45:58.966736 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:59.176721 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 20 19:45:59.199182 kubelet[3155]: E0420 19:45:59.189968 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:59.343543 kubelet[3155]: E0420 19:45:59.337387 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:59.575351 kubelet[3155]: E0420 19:45:59.575146 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:59.789925 kubelet[3155]: E0420 19:45:59.783275 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:45:59.967809 kubelet[3155]: E0420 19:45:59.953862 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:46:00.171620 kubelet[3155]: E0420 19:46:00.166787 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:46:00.343233 kubelet[3155]: E0420 19:46:00.341740 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:46:00.444036 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 20 19:46:00.675203 kubelet[3155]: E0420 19:46:00.673355 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:46:00.799701 kubelet[3155]: E0420 19:46:00.797780 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:46:00.848248 kubelet[3155]: E0420 19:46:00.848006 3155 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 20 19:46:00.881401 kubelet[3155]: I0420 19:46:00.879784 3155 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 20 19:46:00.890585 kubelet[3155]: I0420 19:46:00.882200 3155 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 20 19:46:00.940040 kubelet[3155]: I0420 19:46:00.936364 3155 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 20 19:46:00.955247 kubelet[3155]: E0420 19:46:00.953908 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:46:01.100174 kubelet[3155]: E0420 19:46:01.088958 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:46:01.247163 kubelet[3155]: E0420 19:46:01.242953 3155 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 19:46:01.357465 kubelet[3155]: E0420 19:46:01.356902 3155 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 20 19:46:01.367677 kubelet[3155]: E0420 19:46:01.367453 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 19:46:01.653501 kubelet[3155]: I0420 19:46:01.650310 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:46:01.658625 kubelet[3155]: E0420 19:46:01.655507 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s"
Apr 20 19:46:01.799327 kubelet[3155]: E0420 19:46:01.792927 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost"
Apr 20 19:46:01.837174 kubelet[3155]: E0420 19:46:01.799694 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8284884dcc28b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,LastTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:46:02.325178 kubelet[3155]: I0420 19:46:02.322990 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:46:02.355120 kubelet[3155]: E0420 19:46:02.354508 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost"
Apr 20 19:46:02.789399 kubelet[3155]: I0420 19:46:02.787978 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0882dc0d06df2a57de7e97bbe10d0631-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0882dc0d06df2a57de7e97bbe10d0631\") " pod="kube-system/kube-apiserver-localhost"
Apr 20 19:46:02.800322 kubelet[3155]: I0420 19:46:02.798596 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0882dc0d06df2a57de7e97bbe10d0631-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0882dc0d06df2a57de7e97bbe10d0631\") " pod="kube-system/kube-apiserver-localhost"
Apr 20 19:46:02.819260 kubelet[3155]: I0420 19:46:02.818499 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0882dc0d06df2a57de7e97bbe10d0631-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0882dc0d06df2a57de7e97bbe10d0631\") " pod="kube-system/kube-apiserver-localhost"
Apr 20 19:46:02.859813 kubelet[3155]: I0420 19:46:02.858746 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:46:02.893644 kubelet[3155]: E0420 19:46:02.891946 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost"
Apr 20 19:46:02.951675 kubelet[3155]: I0420 19:46:02.950734 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/59dc6bef4fa0beb64c871485aab08cdf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"59dc6bef4fa0beb64c871485aab08cdf\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 19:46:02.964365 kubelet[3155]: I0420 19:46:02.960170 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/59dc6bef4fa0beb64c871485aab08cdf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"59dc6bef4fa0beb64c871485aab08cdf\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 19:46:02.990466 kubelet[3155]: I0420 19:46:02.986736 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/59dc6bef4fa0beb64c871485aab08cdf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"59dc6bef4fa0beb64c871485aab08cdf\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 19:46:03.033227 kubelet[3155]: I0420 19:46:03.031888 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/59dc6bef4fa0beb64c871485aab08cdf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"59dc6bef4fa0beb64c871485aab08cdf\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 19:46:03.316700 kubelet[3155]: I0420 19:46:03.314796 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/59dc6bef4fa0beb64c871485aab08cdf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"59dc6bef4fa0beb64c871485aab08cdf\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 19:46:03.447282 kubelet[3155]: I0420 19:46:03.443236 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8c463bc49d886414af4d8b2e5922b9f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f8c463bc49d886414af4d8b2e5922b9f\") " pod="kube-system/kube-scheduler-localhost"
Apr 20 19:46:03.923270 kubelet[3155]: I0420 19:46:03.911795 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:46:04.058737 kubelet[3155]: E0420 19:46:04.054938 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost"
Apr 20 19:46:04.103809 systemd[1]: Created slice kubepods-burstable-pod0882dc0d06df2a57de7e97bbe10d0631.slice - libcontainer container kubepods-burstable-pod0882dc0d06df2a57de7e97bbe10d0631.slice.
Apr 20 19:46:04.393678 kubelet[3155]: E0420 19:46:04.392810 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 19:46:04.549134 kubelet[3155]: E0420 19:46:04.548471 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:46:04.797500 systemd[1]: Created slice kubepods-burstable-pod59dc6bef4fa0beb64c871485aab08cdf.slice - libcontainer container kubepods-burstable-pod59dc6bef4fa0beb64c871485aab08cdf.slice.
Apr 20 19:46:05.574631 kubelet[3155]: E0420 19:46:05.574106 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 19:46:05.587889 containerd[1634]: time="2026-04-20T19:46:05.584269512Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"0882dc0d06df2a57de7e97bbe10d0631\" namespace:\"kube-system\""
Apr 20 19:46:05.849768 systemd[1]: Created slice kubepods-burstable-podf8c463bc49d886414af4d8b2e5922b9f.slice - libcontainer container kubepods-burstable-podf8c463bc49d886414af4d8b2e5922b9f.slice.
Apr 20 19:46:05.887825 kubelet[3155]: E0420 19:46:05.885882 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:46:06.155362 containerd[1634]: time="2026-04-20T19:46:06.151482535Z" level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"59dc6bef4fa0beb64c871485aab08cdf\" namespace:\"kube-system\""
Apr 20 19:46:06.211878 kubelet[3155]: E0420 19:46:06.210735 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 19:46:06.564577 kubelet[3155]: I0420 19:46:06.564168 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:46:06.719460 kubelet[3155]: E0420 19:46:06.714364 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost"
Apr 20 19:46:06.758243 kubelet[3155]: E0420 19:46:06.755420 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:46:06.758243 kubelet[3155]: E0420 19:46:06.755449 3155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 20 19:46:06.775420 kubelet[3155]: E0420 19:46:06.762748 3155 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 20 19:46:06.871482 containerd[1634]: time="2026-04-20T19:46:06.870830757Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"f8c463bc49d886414af4d8b2e5922b9f\" namespace:\"kube-system\""
Apr 20 19:46:08.438222 containerd[1634]: time="2026-04-20T19:46:08.437141867Z" level=info msg="connecting to shim 023f1d178593fff81f33c2329dfd5ed7664d4a93fa42bff6207725fae955d5ce" address="unix:///run/containerd/s/391cf9bf55e04de8e45b690b8088eed9410842836a0833df7de036e9b45471e5" namespace=k8s.io protocol=ttrpc version=3
Apr 20 19:46:08.480772 containerd[1634]: time="2026-04-20T19:46:08.480432066Z" level=info msg="connecting to shim 34c405e01bd7703fe20a96a2e27372717ab0a60634061603d1a1e4f1fb2b4457" address="unix:///run/containerd/s/988811282899b1fbab4a706c9a3394190ecd3ffce67f19c8af35aad96ae9279a" namespace=k8s.io protocol=ttrpc version=3
Apr 20 19:46:08.888766 kubelet[3155]: E0420 19:46:08.754138 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s"
Apr 20 19:46:09.756954 containerd[1634]: time="2026-04-20T19:46:09.746607881Z" level=info msg="connecting to shim 0fa31d4d5e50ddeaf2ea96f37d625c207b2977ad5d09cef43e3a6fded0e4c214" address="unix:///run/containerd/s/ed7d8c520f12ac1fb47a8aa71220272282162638ec4a368bb2c465689728ccc8" namespace=k8s.io protocol=ttrpc version=3
Apr 20 19:46:10.559783 systemd[1]: Started cri-containerd-023f1d178593fff81f33c2329dfd5ed7664d4a93fa42bff6207725fae955d5ce.scope - libcontainer container 023f1d178593fff81f33c2329dfd5ed7664d4a93fa42bff6207725fae955d5ce.
Apr 20 19:46:10.620766 kubelet[3155]: I0420 19:46:10.608858 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:46:10.766642 kubelet[3155]: E0420 19:46:10.758987 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost"
Apr 20 19:46:11.673228 kubelet[3155]: E0420 19:46:11.612851 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 19:46:12.164255 kubelet[3155]: E0420 19:46:12.140890 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8284884dcc28b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,LastTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:46:12.526704 systemd[1]: Started cri-containerd-0fa31d4d5e50ddeaf2ea96f37d625c207b2977ad5d09cef43e3a6fded0e4c214.scope - libcontainer container 0fa31d4d5e50ddeaf2ea96f37d625c207b2977ad5d09cef43e3a6fded0e4c214.
Apr 20 19:46:12.653729 systemd[1]: Started cri-containerd-34c405e01bd7703fe20a96a2e27372717ab0a60634061603d1a1e4f1fb2b4457.scope - libcontainer container 34c405e01bd7703fe20a96a2e27372717ab0a60634061603d1a1e4f1fb2b4457.
Apr 20 19:46:15.544246 containerd[1634]: time="2026-04-20T19:46:15.519618312Z" level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"59dc6bef4fa0beb64c871485aab08cdf\" namespace:\"kube-system\" returns sandbox id \"023f1d178593fff81f33c2329dfd5ed7664d4a93fa42bff6207725fae955d5ce\""
Apr 20 19:46:16.126366 kubelet[3155]: E0420 19:46:16.105180 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s"
Apr 20 19:46:16.268860 containerd[1634]: time="2026-04-20T19:46:16.268372615Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"f8c463bc49d886414af4d8b2e5922b9f\" namespace:\"kube-system\" returns sandbox id \"0fa31d4d5e50ddeaf2ea96f37d625c207b2977ad5d09cef43e3a6fded0e4c214\""
Apr 20 19:46:16.344399 kubelet[3155]: E0420 19:46:16.339983 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:46:16.394944 containerd[1634]: time="2026-04-20T19:46:16.391196062Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"0882dc0d06df2a57de7e97bbe10d0631\" namespace:\"kube-system\" returns sandbox id \"34c405e01bd7703fe20a96a2e27372717ab0a60634061603d1a1e4f1fb2b4457\""
Apr 20 19:46:16.614929 kubelet[3155]: E0420 19:46:16.613549 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:46:16.813148 kubelet[3155]: E0420 19:46:16.812528 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:46:17.845342 kubelet[3155]: I0420 19:46:17.844667 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:46:18.347256 kubelet[3155]: E0420 19:46:18.342030 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost"
Apr 20 19:46:18.986172 containerd[1634]: time="2026-04-20T19:46:18.985309477Z" level=info msg="CreateContainer within sandbox \"0fa31d4d5e50ddeaf2ea96f37d625c207b2977ad5d09cef43e3a6fded0e4c214\" for container name:\"kube-scheduler\""
Apr 20 19:46:19.012371 containerd[1634]: time="2026-04-20T19:46:19.003335130Z" level=info msg="CreateContainer within sandbox \"023f1d178593fff81f33c2329dfd5ed7664d4a93fa42bff6207725fae955d5ce\" for container name:\"kube-controller-manager\""
Apr 20 19:46:19.039508 containerd[1634]: time="2026-04-20T19:46:19.025225398Z" level=info msg="CreateContainer within sandbox \"34c405e01bd7703fe20a96a2e27372717ab0a60634061603d1a1e4f1fb2b4457\" for container name:\"kube-apiserver\""
Apr 20 19:46:20.587804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3316990564.mount: Deactivated successfully.
Apr 20 19:46:20.960378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount68497997.mount: Deactivated successfully.
Apr 20 19:46:20.978935 containerd[1634]: time="2026-04-20T19:46:20.977482945Z" level=info msg="Container 27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438: CDI devices from CRI Config.CDIDevices: []"
Apr 20 19:46:21.355201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3483987248.mount: Deactivated successfully.
Apr 20 19:46:21.765539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount892530327.mount: Deactivated successfully.
Apr 20 19:46:21.812172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2952718256.mount: Deactivated successfully.
Apr 20 19:46:21.821651 containerd[1634]: time="2026-04-20T19:46:21.821008413Z" level=info msg="Container ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299: CDI devices from CRI Config.CDIDevices: []"
Apr 20 19:46:21.899138 kubelet[3155]: E0420 19:46:21.869826 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 19:46:22.284963 containerd[1634]: time="2026-04-20T19:46:22.282443199Z" level=info msg="Container 6db55d829698bd8e2f9475d45c0131e1d1a5705679e9bb8fab8a2d16420cbb03: CDI devices from CRI Config.CDIDevices: []"
Apr 20 19:46:22.326721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3724438360.mount: Deactivated successfully.
Apr 20 19:46:22.410838 kubelet[3155]: E0420 19:46:22.353173 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8284884dcc28b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,LastTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:46:23.328671 kubelet[3155]: E0420 19:46:23.326279 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s"
Apr 20 19:46:24.080656 containerd[1634]: time="2026-04-20T19:46:24.076588776Z" level=info msg="CreateContainer within sandbox \"023f1d178593fff81f33c2329dfd5ed7664d4a93fa42bff6207725fae955d5ce\" for name:\"kube-controller-manager\" returns container id \"ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299\""
Apr 20 19:46:24.133280 containerd[1634]: time="2026-04-20T19:46:24.077205408Z" level=info msg="CreateContainer within sandbox \"34c405e01bd7703fe20a96a2e27372717ab0a60634061603d1a1e4f1fb2b4457\" for name:\"kube-apiserver\" returns container id \"6db55d829698bd8e2f9475d45c0131e1d1a5705679e9bb8fab8a2d16420cbb03\""
Apr 20 19:46:24.443157 containerd[1634]: time="2026-04-20T19:46:24.439522822Z" level=info msg="CreateContainer within sandbox \"0fa31d4d5e50ddeaf2ea96f37d625c207b2977ad5d09cef43e3a6fded0e4c214\" for name:\"kube-scheduler\" returns container id \"27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438\""
Apr 20 19:46:24.580871 containerd[1634]: time="2026-04-20T19:46:24.578485968Z" level=info msg="StartContainer for \"ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299\""
Apr 20 19:46:24.676935 containerd[1634]: time="2026-04-20T19:46:24.578746193Z" level=info msg="StartContainer for \"6db55d829698bd8e2f9475d45c0131e1d1a5705679e9bb8fab8a2d16420cbb03\""
Apr 20 19:46:24.961315 containerd[1634]: time="2026-04-20T19:46:24.942153381Z" level=info msg="StartContainer for \"27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438\""
Apr 20 19:46:25.353214 containerd[1634]: time="2026-04-20T19:46:25.340839351Z" level=info msg="connecting to shim ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299" address="unix:///run/containerd/s/391cf9bf55e04de8e45b690b8088eed9410842836a0833df7de036e9b45471e5" protocol=ttrpc version=3
Apr 20 19:46:25.916884 containerd[1634]: time="2026-04-20T19:46:25.885913653Z" level=info msg="connecting to shim 6db55d829698bd8e2f9475d45c0131e1d1a5705679e9bb8fab8a2d16420cbb03" address="unix:///run/containerd/s/988811282899b1fbab4a706c9a3394190ecd3ffce67f19c8af35aad96ae9279a" protocol=ttrpc version=3
Apr 20 19:46:25.937314 kubelet[3155]: I0420 19:46:25.934958 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:46:26.277161 kubelet[3155]: E0420 19:46:26.157975 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost"
Apr 20 19:46:26.380531 containerd[1634]: time="2026-04-20T19:46:26.380357446Z" level=info msg="connecting to shim 27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438" address="unix:///run/containerd/s/ed7d8c520f12ac1fb47a8aa71220272282162638ec4a368bb2c465689728ccc8" protocol=ttrpc version=3
Apr 20 19:46:27.681931 systemd[1]: Started cri-containerd-ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299.scope - libcontainer container ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299.
Apr 20 19:46:31.133480 kubelet[3155]: E0420 19:46:31.133094 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s"
Apr 20 19:46:31.877174 systemd[1]: Started cri-containerd-27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438.scope - libcontainer container 27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438.
Apr 20 19:46:32.325299 kubelet[3155]: E0420 19:46:32.313953 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 19:46:33.402141 kubelet[3155]: E0420 19:46:33.387027 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8284884dcc28b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,LastTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:46:33.578836 systemd[1]: Started cri-containerd-6db55d829698bd8e2f9475d45c0131e1d1a5705679e9bb8fab8a2d16420cbb03.scope - libcontainer container 6db55d829698bd8e2f9475d45c0131e1d1a5705679e9bb8fab8a2d16420cbb03.
Apr 20 19:46:33.977625 kubelet[3155]: I0420 19:46:33.976778 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:46:34.953979 kubelet[3155]: E0420 19:46:34.942971 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost"
Apr 20 19:46:37.999898 containerd[1634]: time="2026-04-20T19:46:37.934936763Z" level=info msg="StartContainer for \"ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299\" returns successfully"
Apr 20 19:46:39.662398 kubelet[3155]: E0420 19:46:39.632564 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s"
Apr 20 19:46:39.950249 kubelet[3155]: E0420 19:46:39.932772 3155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 20 19:46:42.934403 kubelet[3155]: E0420 19:46:42.928974 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 19:46:48.449992 containerd[1634]: time="2026-04-20T19:46:48.442827031Z" level=info msg="StartContainer for \"6db55d829698bd8e2f9475d45c0131e1d1a5705679e9bb8fab8a2d16420cbb03\" returns successfully"
Apr 20 19:46:48.449992 containerd[1634]: time="2026-04-20T19:46:48.449127479Z" level=info msg="StartContainer for \"27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438\" returns successfully"
Apr 20 19:46:50.775688 kubelet[3155]: I0420 19:46:50.773690 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:46:50.842290 kubelet[3155]: E0420 19:46:50.393840 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8284884dcc28b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,LastTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:46:51.553213 kubelet[3155]: E0420 19:46:51.455552 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s"
Apr 20 19:46:51.966790 kubelet[3155]: E0420 19:46:51.899523 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost"
Apr 20 19:46:52.979208 kubelet[3155]: E0420 19:46:52.965017 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 19:46:54.670909 kubelet[3155]: E0420 19:46:54.665663 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 19:46:54.876885 kubelet[3155]: E0420 19:46:54.873328 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:46:59.086377 kubelet[3155]: E0420 19:46:59.051721 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s"
Apr 20 19:46:59.963386 kubelet[3155]: I0420 19:46:59.962931 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:47:00.022460 kubelet[3155]: E0420 19:47:00.021906 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 19:47:00.074505 kubelet[3155]: E0420 19:47:00.071236 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:47:00.194812 kubelet[3155]: E0420 19:47:00.182802 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost"
Apr 20 19:47:00.970512 kubelet[3155]: E0420 19:47:00.943558 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8284884dcc28b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,LastTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:47:01.921599 kubelet[3155]: E0420 19:47:01.920816 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 19:47:02.089993 kubelet[3155]: E0420 19:47:02.073359 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:47:03.132379 kubelet[3155]: E0420 19:47:03.128620 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 19:47:05.379961 kubelet[3155]: E0420 19:47:05.379576 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 19:47:05.420991 kubelet[3155]: E0420 19:47:05.420588 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:47:05.862216 kubelet[3155]: E0420 19:47:05.861624 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 19:47:05.978319 kubelet[3155]: E0420 19:47:05.977687 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 19:47:06.082647 kubelet[3155]: E0420 19:47:06.080465 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:47:06.641864 kubelet[3155]: E0420 19:47:06.640717 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s"
Apr 20 19:47:06.769002 kubelet[3155]: E0420 19:47:06.765792 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:47:07.687877 kubelet[3155]: I0420 19:47:07.687213 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:47:07.837512 kubelet[3155]: E0420 19:47:07.834683 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost"
Apr 20 19:47:09.558855 kubelet[3155]: E0420 19:47:09.556892 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 19:47:09.925079 kubelet[3155]: E0420 19:47:09.888951 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 19:47:10.077927 kubelet[3155]: E0420 19:47:10.050525 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:47:10.245784 kubelet[3155]: E0420 19:47:10.244780 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:47:10.276936 kubelet[3155]: E0420 19:47:10.274258 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 19:47:10.561350 kubelet[3155]: E0420 19:47:10.554989 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:47:11.371357 kubelet[3155]: E0420 19:47:11.367981 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8284884dcc28b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,LastTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:47:11.464153 kubelet[3155]: E0420 19:47:11.462497 3155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 20 19:47:13.142652 kubelet[3155]: E0420 19:47:13.137899 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 19:47:13.337132 kubelet[3155]: E0420 19:47:13.336943 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 19:47:13.391604 kubelet[3155]: E0420 19:47:13.389476 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:47:14.179023 kubelet[3155]: E0420 19:47:14.088967 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s"
Apr 20 19:47:15.782200 kubelet[3155]: I0420 19:47:15.781874 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:47:15.929556 kubelet[3155]: E0420 19:47:15.927223 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost"
Apr 20 19:47:23.298443 kubelet[3155]: E0420 19:47:23.290777 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 19:47:24.200203 kubelet[3155]: I0420 19:47:24.199968 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:47:31.296572 kubelet[3155]: E0420 19:47:31.292689 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 20 19:47:31.452654 kubelet[3155]: E0420 19:47:31.432581 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8284884dcc28b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,LastTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:47:33.332303 kubelet[3155]: E0420 19:47:33.331104 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 19:47:34.485014 kubelet[3155]: E0420 19:47:34.483167 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 20 19:47:42.159482 kubelet[3155]: I0420 19:47:42.158480 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:47:43.355107 kubelet[3155]: E0420 19:47:43.350795 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 19:47:48.385160 kubelet[3155]: E0420 19:47:48.381763 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 20 19:47:51.907465 kubelet[3155]: E0420 19:47:51.749564 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8284884dcc28b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,LastTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:47:52.363923 kubelet[3155]: E0420 19:47:52.340930 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 20 19:47:53.424449 kubelet[3155]: E0420 19:47:53.422618 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 19:47:53.446772 kubelet[3155]: E0420 19:47:53.444445 3155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 20 19:48:00.116994 kubelet[3155]: I0420 19:48:00.111571 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:48:03.437813 kubelet[3155]: E0420 19:48:03.435804 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 19:48:05.641462 kubelet[3155]: E0420 19:48:05.632523 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 20 19:48:10.395089 kubelet[3155]: E0420 19:48:10.393330 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 20 19:48:12.057231 kubelet[3155]: E0420 19:48:12.040968 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8284884dcc28b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,LastTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:48:12.080258 kubelet[3155]: E0420 19:48:12.060532 3155 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{localhost.18a8284884dcc28b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,LastTimestamp:2026-04-20 19:45:44.929477259 +0000 UTC m=+43.907837399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:48:13.452178 kubelet[3155]: E0420 19:48:13.451467 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 19:48:18.054641 kubelet[3155]: E0420 19:48:18.044595 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 19:48:18.166298 kubelet[3155]: I0420 19:48:18.164607 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:48:18.225776 kubelet[3155]: E0420 19:48:18.212630 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:48:19.628242 kubelet[3155]: E0420 19:48:19.616674 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 19:48:19.833284 kubelet[3155]: E0420 19:48:19.828531 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:48:22.243128 kubelet[3155]: E0420 19:48:22.229813 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8284947b4ea5a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:48.19842313 +0000 UTC m=+47.176783241,LastTimestamp:2026-04-20 19:45:48.19842313 +0000 UTC m=+47.176783241,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:48:22.836000 kubelet[3155]: E0420 19:48:22.832626 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 20 19:48:23.495257 kubelet[3155]: E0420 19:48:23.494627 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 19:48:25.037302 kubelet[3155]: E0420 19:48:25.020714 3155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 20 19:48:28.383398 kubelet[3155]: E0420 19:48:28.379807 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 20 19:48:33.527666 kubelet[3155]: E0420 19:48:33.527232 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 19:48:35.574655 kubelet[3155]: I0420 19:48:35.574258 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:48:35.683565 kubelet[3155]: E0420 19:48:35.638621 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8284947b4ea5a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:48.19842313 +0000 UTC m=+47.176783241,LastTimestamp:2026-04-20 19:45:48.19842313 +0000 UTC m=+47.176783241,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:48:39.946483 kubelet[3155]: E0420 19:48:39.946190 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 20 19:48:41.442405 kubelet[3155]: E0420 19:48:41.418760 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 19:48:41.552845 kubelet[3155]: E0420 19:48:41.495793 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:48:43.680903 kubelet[3155]: E0420 19:48:43.680127 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 19:48:45.743186 kubelet[3155]: E0420 19:48:45.680797 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 20 19:48:53.455389 kubelet[3155]: I0420 19:48:53.452875 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:48:53.740944 kubelet[3155]: E0420 19:48:53.724897 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 19:48:56.140988 kubelet[3155]: E0420 19:48:56.124638 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8284947b4ea5a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:48.19842313 +0000 UTC m=+47.176783241,LastTimestamp:2026-04-20 19:45:48.19842313 +0000 UTC m=+47.176783241,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:48:57.147357 kubelet[3155]: E0420 19:48:57.139336 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 20 19:48:58.530251 kubelet[3155]: E0420 19:48:58.463476 3155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 20 19:49:03.642558 kubelet[3155]: E0420 19:49:03.632704 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 20 19:49:03.851416 kubelet[3155]: E0420 19:49:03.843632 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 19:49:12.350714 kubelet[3155]: I0420 19:49:12.327633 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:49:13.887696 kubelet[3155]: E0420 19:49:13.885730 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 19:49:14.246412 kubelet[3155]: E0420 19:49:14.227566 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 20 19:49:16.266344 kubelet[3155]: E0420 19:49:16.249518 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8284947b4ea5a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:48.19842313 +0000 UTC m=+47.176783241,LastTimestamp:2026-04-20 19:45:48.19842313 +0000 UTC m=+47.176783241,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:49:22.591033 kubelet[3155]: E0420 19:49:22.586399 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS
handshake timeout" node="localhost" Apr 20 19:49:23.979618 kubelet[3155]: E0420 19:49:23.975773 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 19:49:28.720710 kubelet[3155]: E0420 19:49:28.720518 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:49:28.928585 kubelet[3155]: E0420 19:49:28.926921 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:49:29.829017 kubelet[3155]: E0420 19:49:29.823955 3155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 19:49:30.672250 kubelet[3155]: I0420 19:49:30.668284 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 19:49:31.634262 kubelet[3155]: E0420 19:49:31.597674 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 20 19:49:34.076586 kubelet[3155]: E0420 19:49:34.070976 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 19:49:35.539280 kubelet[3155]: E0420 19:49:35.535026 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:49:35.672195 kubelet[3155]: E0420 19:49:35.668004 3155 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:49:36.777244 kubelet[3155]: E0420 19:49:36.766731 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8284947b4ea5a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:48.19842313 +0000 UTC m=+47.176783241,LastTimestamp:2026-04-20 19:45:48.19842313 +0000 UTC m=+47.176783241,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:49:41.029613 kubelet[3155]: E0420 19:49:41.025930 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 20 19:49:44.255253 kubelet[3155]: E0420 19:49:44.245727 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 19:49:48.773366 kubelet[3155]: E0420 19:49:48.755401 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 20 19:49:49.594701 kubelet[3155]: I0420 19:49:49.593687 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 19:49:54.289201 
kubelet[3155]: E0420 19:49:54.283907 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 19:49:56.996279 kubelet[3155]: E0420 19:49:56.983934 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8284947b4ea5a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:48.19842313 +0000 UTC m=+47.176783241,LastTimestamp:2026-04-20 19:45:48.19842313 +0000 UTC m=+47.176783241,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:49:58.766719 kubelet[3155]: E0420 19:49:58.746281 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:49:58.835483 kubelet[3155]: E0420 19:49:58.833755 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:49:59.838775 kubelet[3155]: E0420 19:49:59.838151 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 20 19:50:02.325610 kubelet[3155]: E0420 19:50:02.319583 3155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post 
\"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 19:50:04.372355 kubelet[3155]: E0420 19:50:04.370030 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 19:50:05.997265 kubelet[3155]: E0420 19:50:05.992320 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 20 19:50:07.646304 kubelet[3155]: I0420 19:50:07.644672 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 19:50:14.444267 kubelet[3155]: E0420 19:50:14.434745 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 19:50:17.373241 kubelet[3155]: E0420 19:50:17.372353 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8284947b4ea5a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:48.19842313 +0000 UTC m=+47.176783241,LastTimestamp:2026-04-20 19:45:48.19842313 +0000 UTC m=+47.176783241,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:50:17.967905 kubelet[3155]: E0420 19:50:17.966703 3155 kubelet_node_status.go:106] "Unable to 
register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 20 19:50:23.346958 kubelet[3155]: E0420 19:50:23.283917 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 20 19:50:24.481566 kubelet[3155]: E0420 19:50:24.480743 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 19:50:25.465274 kubelet[3155]: I0420 19:50:25.433820 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 19:50:34.439578 kubelet[3155]: E0420 19:50:34.437936 3155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 19:50:34.594259 kubelet[3155]: E0420 19:50:34.590890 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 19:50:34.988896 kubelet[3155]: E0420 19:50:34.985654 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:50:35.026357 kubelet[3155]: E0420 19:50:35.024655 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:50:36.169796 kubelet[3155]: E0420 19:50:36.166621 3155 kubelet_node_status.go:106] "Unable to register node with API server" 
err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 20 19:50:37.578258 kubelet[3155]: E0420 19:50:37.540979 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8284947b4ea5a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:48.19842313 +0000 UTC m=+47.176783241,LastTimestamp:2026-04-20 19:45:48.19842313 +0000 UTC m=+47.176783241,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:50:40.844396 kubelet[3155]: E0420 19:50:40.838358 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 20 19:50:44.215358 kubelet[3155]: I0420 19:50:44.212814 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 19:50:44.680157 kubelet[3155]: E0420 19:50:44.678721 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 19:50:54.409568 kubelet[3155]: E0420 19:50:54.392281 3155 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 20 19:50:54.692643 kubelet[3155]: E0420 19:50:54.689645 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to 
get node info: node \"localhost\" not found" Apr 20 19:50:57.860995 kubelet[3155]: E0420 19:50:57.860584 3155 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 20 19:50:57.993481 kubelet[3155]: E0420 19:50:57.989372 3155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8284947b4ea5a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:48.19842313 +0000 UTC m=+47.176783241,LastTimestamp:2026-04-20 19:45:48.19842313 +0000 UTC m=+47.176783241,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:51:01.448416 kubelet[3155]: I0420 19:51:01.448231 3155 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 19:51:02.943755 kubelet[3155]: E0420 19:51:02.943571 3155 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:51:02.947323 kubelet[3155]: E0420 19:51:02.944222 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:51:04.748676 kubelet[3155]: E0420 19:51:04.747401 3155 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not 
found" Apr 20 19:51:04.981416 kubelet[3155]: E0420 19:51:04.980632 3155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 19:51:11.892663 kubelet[3155]: E0420 19:51:11.885636 3155 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 20 19:51:12.022183 kubelet[3155]: I0420 19:51:12.021413 3155 apiserver.go:52] "Watching apiserver" Apr 20 19:51:12.213599 kubelet[3155]: I0420 19:51:12.209638 3155 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 20 19:51:12.280843 kubelet[3155]: I0420 19:51:12.280387 3155 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 20 19:51:12.443187 kubelet[3155]: I0420 19:51:12.400927 3155 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 20 19:51:12.552380 kubelet[3155]: E0420 19:51:12.542980 3155 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a8284947b4ea5a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:48.19842313 +0000 UTC m=+47.176783241,LastTimestamp:2026-04-20 19:45:48.19842313 +0000 UTC m=+47.176783241,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 
19:51:12.797718 kubelet[3155]: I0420 19:51:12.797344 3155 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 20 19:51:12.849124 kubelet[3155]: E0420 19:51:12.846480 3155 kubelet_node_status.go:386] "Node not becoming ready in time after startup" Apr 20 19:51:12.849840 kubelet[3155]: E0420 19:51:12.849282 3155 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a8284b1e21cc51 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:45:56.090850385 +0000 UTC m=+55.069210494,LastTimestamp:2026-04-20 19:45:56.090850385 +0000 UTC m=+55.069210494,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:51:12.974107 kubelet[3155]: E0420 19:51:12.973703 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:51:13.218322 kubelet[3155]: I0420 19:51:13.217825 3155 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 20 19:51:13.852434 kubelet[3155]: E0420 19:51:13.852163 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:51:13.857436 kubelet[3155]: E0420 19:51:13.852199 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Apr 20 19:51:15.571802 containerd[1634]: time="2026-04-20T19:51:15.456830752Z" level=info msg="container event discarded" container=023f1d178593fff81f33c2329dfd5ed7664d4a93fa42bff6207725fae955d5ce type=CONTAINER_CREATED_EVENT Apr 20 19:51:15.747412 containerd[1634]: time="2026-04-20T19:51:15.725919709Z" level=info msg="container event discarded" container=023f1d178593fff81f33c2329dfd5ed7664d4a93fa42bff6207725fae955d5ce type=CONTAINER_STARTED_EVENT Apr 20 19:51:16.189307 kubelet[3155]: E0420 19:51:16.188507 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:51:16.376521 containerd[1634]: time="2026-04-20T19:51:16.284011421Z" level=info msg="container event discarded" container=0fa31d4d5e50ddeaf2ea96f37d625c207b2977ad5d09cef43e3a6fded0e4c214 type=CONTAINER_CREATED_EVENT Apr 20 19:51:16.376521 containerd[1634]: time="2026-04-20T19:51:16.370713079Z" level=info msg="container event discarded" container=0fa31d4d5e50ddeaf2ea96f37d625c207b2977ad5d09cef43e3a6fded0e4c214 type=CONTAINER_STARTED_EVENT Apr 20 19:51:16.419353 containerd[1634]: time="2026-04-20T19:51:16.416802703Z" level=info msg="container event discarded" container=34c405e01bd7703fe20a96a2e27372717ab0a60634061603d1a1e4f1fb2b4457 type=CONTAINER_CREATED_EVENT Apr 20 19:51:16.428463 containerd[1634]: time="2026-04-20T19:51:16.424364636Z" level=info msg="container event discarded" container=34c405e01bd7703fe20a96a2e27372717ab0a60634061603d1a1e4f1fb2b4457 type=CONTAINER_STARTED_EVENT Apr 20 19:51:16.885558 kubelet[3155]: E0420 19:51:16.870780 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.021s" Apr 20 19:51:21.379877 kubelet[3155]: E0420 19:51:21.377606 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" Apr 20 19:51:23.393379 kubelet[3155]: I0420 19:51:23.390339 3155 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=10.388368036 podStartE2EDuration="10.388368036s" podCreationTimestamp="2026-04-20 19:51:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 19:51:23.374643419 +0000 UTC m=+382.353003519" watchObservedRunningTime="2026-04-20 19:51:23.388368036 +0000 UTC m=+382.366728177" Apr 20 19:51:23.674376 containerd[1634]: time="2026-04-20T19:51:23.651548294Z" level=info msg="container event discarded" container=6db55d829698bd8e2f9475d45c0131e1d1a5705679e9bb8fab8a2d16420cbb03 type=CONTAINER_CREATED_EVENT Apr 20 19:51:23.751328 containerd[1634]: time="2026-04-20T19:51:23.717811393Z" level=info msg="container event discarded" container=ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299 type=CONTAINER_CREATED_EVENT Apr 20 19:51:23.894478 containerd[1634]: time="2026-04-20T19:51:23.844709843Z" level=info msg="container event discarded" container=27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438 type=CONTAINER_CREATED_EVENT Apr 20 19:51:26.169989 kubelet[3155]: I0420 19:51:26.169752 3155 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=14.169698103 podStartE2EDuration="14.169698103s" podCreationTimestamp="2026-04-20 19:51:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 19:51:24.432460457 +0000 UTC m=+383.410820559" watchObservedRunningTime="2026-04-20 19:51:26.169698103 +0000 UTC m=+385.148058210" Apr 20 19:51:26.644580 kubelet[3155]: E0420 19:51:26.592456 3155 kubelet.go:3130] "Container runtime network not 
ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:51:26.660373 kubelet[3155]: E0420 19:51:26.659166 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.79s" Apr 20 19:51:29.983320 kubelet[3155]: E0420 19:51:29.981811 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.127s" Apr 20 19:51:34.344804 kubelet[3155]: E0420 19:51:34.341837 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:51:37.190457 containerd[1634]: time="2026-04-20T19:51:37.045769384Z" level=info msg="container event discarded" container=ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299 type=CONTAINER_STARTED_EVENT Apr 20 19:51:39.924530 kubelet[3155]: E0420 19:51:39.870578 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.978s" Apr 20 19:51:40.973776 kubelet[3155]: E0420 19:51:40.973281 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:51:42.310349 kubelet[3155]: E0420 19:51:42.303005 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.359s" Apr 20 19:51:44.976709 containerd[1634]: time="2026-04-20T19:51:44.834746786Z" level=info msg="container event discarded" container=27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438 type=CONTAINER_STARTED_EVENT Apr 20 19:51:45.182002 containerd[1634]: time="2026-04-20T19:51:45.179674401Z" level=info msg="container event discarded" 
container=6db55d829698bd8e2f9475d45c0131e1d1a5705679e9bb8fab8a2d16420cbb03 type=CONTAINER_STARTED_EVENT Apr 20 19:51:45.255287 kubelet[3155]: E0420 19:51:45.248878 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.895s" Apr 20 19:51:46.444235 kubelet[3155]: E0420 19:51:46.436384 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:51:55.527414 kubelet[3155]: E0420 19:51:55.497876 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:52:00.189274 kubelet[3155]: E0420 19:51:59.980463 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="14.648s" Apr 20 19:52:02.680384 kubelet[3155]: E0420 19:52:02.323946 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:52:07.874540 kubelet[3155]: E0420 19:52:07.874232 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.487s" Apr 20 19:52:08.669541 kubelet[3155]: E0420 19:52:08.651511 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:52:12.097556 kubelet[3155]: E0420 19:52:12.096747 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.092s" Apr 20 19:52:13.925247 kubelet[3155]: E0420 19:52:13.924640 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:52:16.259225 kubelet[3155]: E0420 19:52:16.252750 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.363s" Apr 20 19:52:16.516507 kubelet[3155]: E0420 19:52:16.509287 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:52:19.829159 kubelet[3155]: E0420 19:52:19.828540 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:52:21.544603 kubelet[3155]: E0420 19:52:21.542640 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.617s" Apr 20 19:52:26.246397 kubelet[3155]: E0420 19:52:26.237703 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:52:30.457627 kubelet[3155]: E0420 19:52:30.421419 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.838s" Apr 20 19:52:32.531434 kubelet[3155]: E0420 19:52:32.530947 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:52:32.896572 kubelet[3155]: E0420 19:52:32.881850 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:52:34.326553 kubelet[3155]: E0420 19:52:34.326331 3155 kubelet.go:2691] "Housekeeping took longer than expected" 
err="housekeeping took too long" expected="1s" actual="3.733s" Apr 20 19:52:37.298016 kubelet[3155]: E0420 19:52:37.276981 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.853s" Apr 20 19:52:39.927287 kubelet[3155]: E0420 19:52:39.809714 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:52:46.568624 kubelet[3155]: E0420 19:52:46.565601 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.228s" Apr 20 19:52:46.961573 kubelet[3155]: E0420 19:52:46.659991 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:52:49.412527 kubelet[3155]: E0420 19:52:49.079011 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:52:54.369463 kubelet[3155]: E0420 19:52:54.347994 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:52:58.316450 kubelet[3155]: E0420 19:52:58.287034 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.307s" Apr 20 19:53:06.836661 kubelet[3155]: E0420 19:53:06.827677 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:53:08.838851 kubelet[3155]: E0420 19:53:08.748557 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took 
too long" expected="1s" actual="10.148s" Apr 20 19:53:12.748854 kubelet[3155]: E0420 19:53:12.746801 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:53:14.469905 systemd[1]: cri-containerd-ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299.scope: Deactivated successfully. Apr 20 19:53:14.496018 systemd[1]: cri-containerd-ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299.scope: Consumed 38.101s CPU time, 21.4M memory peak. Apr 20 19:53:15.300902 containerd[1634]: time="2026-04-20T19:53:15.286708012Z" level=info msg="received container exit event container_id:\"ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299\" id:\"ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299\" pid:3361 exit_status:1 exited_at:{seconds:1776714794 nanos:796931194}" Apr 20 19:53:15.476469 kubelet[3155]: E0420 19:53:15.397896 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.35s" Apr 20 19:53:21.843680 kubelet[3155]: E0420 19:53:21.762702 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:53:24.279792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299-rootfs.mount: Deactivated successfully. 
Apr 20 19:53:26.191282 containerd[1634]: time="2026-04-20T19:53:26.089690906Z" level=error msg="failed to delete shim" error="close wait error: context deadline exceeded" id=ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299 Apr 20 19:53:29.433305 kubelet[3155]: E0420 19:53:29.369017 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.538s" Apr 20 19:53:30.529810 kubelet[3155]: E0420 19:53:30.524009 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:53:37.625284 kubelet[3155]: E0420 19:53:37.545803 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:53:49.145731 kubelet[3155]: E0420 19:53:49.132683 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="19.192s" Apr 20 19:53:50.322997 kubelet[3155]: E0420 19:53:48.893926 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:54:02.462960 kubelet[3155]: E0420 19:54:02.228932 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:54:11.060255 kubelet[3155]: E0420 19:54:11.046724 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:54:19.993671 kubelet[3155]: E0420 19:54:19.991749 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took 
too long" expected="1s" actual="29.742s" Apr 20 19:54:21.790535 kubelet[3155]: E0420 19:54:21.077029 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:54:26.379245 kubelet[3155]: E0420 19:54:26.375479 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:54:27.838834 kubelet[3155]: E0420 19:54:27.832810 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:54:29.201165 kubelet[3155]: I0420 19:54:29.198904 3155 scope.go:122] "RemoveContainer" containerID="ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299" Apr 20 19:54:31.732212 kubelet[3155]: E0420 19:54:31.726166 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:54:32.466916 kubelet[3155]: E0420 19:54:32.290016 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:54:38.553992 systemd[1]: cri-containerd-27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438.scope: Deactivated successfully. Apr 20 19:54:38.566926 systemd[1]: cri-containerd-27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438.scope: Consumed 1min 414ms CPU time, 24.7M memory peak. 
Apr 20 19:54:45.531365 kubelet[3155]: E0420 19:54:45.358740 3155 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 20 19:54:48.998610 containerd[1634]: time="2026-04-20T19:54:48.978411207Z" level=info msg="received container exit event container_id:\"27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438\" id:\"27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438\" pid:3387 exit_status:1 exited_at:{seconds:1776714886 nanos:204773206}" Apr 20 19:54:51.803361 containerd[1634]: time="2026-04-20T19:54:51.590633881Z" level=error msg="post event" error="context deadline exceeded" Apr 20 19:54:51.803361 containerd[1634]: time="2026-04-20T19:54:51.661233225Z" level=error msg="ttrpc: received message on inactive stream" stream=9 Apr 20 19:54:55.816020 kubelet[3155]: E0420 19:54:55.815901 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:54:56.214786 kubelet[3155]: I0420 19:54:55.991039 3155 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=223.987384187 podStartE2EDuration="3m43.987384187s" podCreationTimestamp="2026-04-20 19:51:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 19:51:26.231796976 +0000 UTC m=+385.210157083" watchObservedRunningTime="2026-04-20 19:54:55.987384187 +0000 UTC m=+594.965744299" Apr 20 19:54:56.877190 containerd[1634]: time="2026-04-20T19:54:56.747477403Z" level=info msg="CreateContainer within sandbox \"023f1d178593fff81f33c2329dfd5ed7664d4a93fa42bff6207725fae955d5ce\" for container 
name:\"kube-controller-manager\" attempt:1" Apr 20 19:55:00.281845 containerd[1634]: time="2026-04-20T19:55:00.251876667Z" level=error msg="failed to delete task" error="context deadline exceeded" id=27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438 Apr 20 19:55:00.671506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438-rootfs.mount: Deactivated successfully. Apr 20 19:55:00.972193 containerd[1634]: time="2026-04-20T19:55:00.891770068Z" level=error msg="ttrpc: received message on inactive stream" stream=41 Apr 20 19:55:01.042619 containerd[1634]: time="2026-04-20T19:55:01.032158426Z" level=error msg="failed to handle container TaskExit event container_id:\"27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438\" id:\"27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438\" pid:3387 exit_status:1 exited_at:{seconds:1776714886 nanos:204773206}" error="failed to stop container: failed to delete task: context deadline exceeded" Apr 20 19:55:02.598799 containerd[1634]: time="2026-04-20T19:55:02.553938782Z" level=info msg="TaskExit event container_id:\"27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438\" id:\"27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438\" pid:3387 exit_status:1 exited_at:{seconds:1776714886 nanos:204773206}" Apr 20 19:55:03.779255 kubelet[3155]: E0420 19:55:03.660948 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="43.424s" Apr 20 19:55:04.245366 kubelet[3155]: E0420 19:55:04.200759 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:55:06.265607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2942389501.mount: Deactivated successfully. 
Apr 20 19:55:06.701821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4146025056.mount: Deactivated successfully. Apr 20 19:55:06.750162 containerd[1634]: time="2026-04-20T19:55:06.735950116Z" level=info msg="Container de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d: CDI devices from CRI Config.CDIDevices: []" Apr 20 19:55:06.976606 kubelet[3155]: E0420 19:55:06.976017 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:55:10.083417 kubelet[3155]: E0420 19:55:10.025632 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:55:10.125483 kubelet[3155]: E0420 19:55:10.098857 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.881s" Apr 20 19:55:12.415622 kubelet[3155]: E0420 19:55:12.415418 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.86s" Apr 20 19:55:12.441126 containerd[1634]: time="2026-04-20T19:55:12.440885891Z" level=error msg="failed to delete task" error="context deadline exceeded" id=27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438 Apr 20 19:55:12.569880 containerd[1634]: time="2026-04-20T19:55:12.548993281Z" level=error msg="Failed to handle backOff event container_id:\"27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438\" id:\"27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438\" pid:3387 exit_status:1 exited_at:{seconds:1776714886 nanos:204773206} for 27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 19:55:12.591489 
containerd[1634]: time="2026-04-20T19:55:12.585254416Z" level=info msg="CreateContainer within sandbox \"023f1d178593fff81f33c2329dfd5ed7664d4a93fa42bff6207725fae955d5ce\" for name:\"kube-controller-manager\" attempt:1 returns container id \"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\"" Apr 20 19:55:12.739040 containerd[1634]: time="2026-04-20T19:55:12.727216591Z" level=error msg="ttrpc: received message on inactive stream" stream=57 Apr 20 19:55:12.945884 containerd[1634]: time="2026-04-20T19:55:12.945410370Z" level=info msg="StartContainer for \"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\"" Apr 20 19:55:14.157924 containerd[1634]: time="2026-04-20T19:55:14.157741229Z" level=info msg="connecting to shim de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d" address="unix:///run/containerd/s/391cf9bf55e04de8e45b690b8088eed9410842836a0833df7de036e9b45471e5" protocol=ttrpc version=3 Apr 20 19:55:15.177665 containerd[1634]: time="2026-04-20T19:55:15.177287917Z" level=info msg="TaskExit event container_id:\"27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438\" id:\"27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438\" pid:3387 exit_status:1 exited_at:{seconds:1776714886 nanos:204773206}" Apr 20 19:55:15.326595 kubelet[3155]: E0420 19:55:15.323224 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:55:15.895167 kubelet[3155]: E0420 19:55:15.892089 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.024s" Apr 20 19:55:16.764357 systemd[1]: Started cri-containerd-de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d.scope - libcontainer container de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d. 
Apr 20 19:55:19.969479 kubelet[3155]: E0420 19:55:19.967973 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.994s" Apr 20 19:55:22.085574 kubelet[3155]: E0420 19:55:21.944952 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:55:23.820267 kubelet[3155]: E0420 19:55:23.817762 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.755s" Apr 20 19:55:25.011550 containerd[1634]: time="2026-04-20T19:55:25.011350567Z" level=info msg="StartContainer for \"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" returns successfully" Apr 20 19:55:26.955372 kubelet[3155]: E0420 19:55:26.954581 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.075s" Apr 20 19:55:27.247476 kubelet[3155]: E0420 19:55:27.236460 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:55:30.156993 kubelet[3155]: E0420 19:55:30.156855 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:55:30.248947 kubelet[3155]: I0420 19:55:30.248737 3155 scope.go:122] "RemoveContainer" containerID="27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438" Apr 20 19:55:30.267662 kubelet[3155]: E0420 19:55:30.260014 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:55:31.177332 containerd[1634]: time="2026-04-20T19:55:31.176030738Z" level=info 
msg="CreateContainer within sandbox \"0fa31d4d5e50ddeaf2ea96f37d625c207b2977ad5d09cef43e3a6fded0e4c214\" for container name:\"kube-scheduler\" attempt:1" Apr 20 19:55:32.455595 kubelet[3155]: E0420 19:55:32.449468 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:55:32.475273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1125437676.mount: Deactivated successfully. Apr 20 19:55:32.651290 containerd[1634]: time="2026-04-20T19:55:32.586683891Z" level=info msg="Container 3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749: CDI devices from CRI Config.CDIDevices: []" Apr 20 19:55:32.863704 kubelet[3155]: E0420 19:55:32.855952 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:55:33.767962 containerd[1634]: time="2026-04-20T19:55:33.767499405Z" level=info msg="CreateContainer within sandbox \"0fa31d4d5e50ddeaf2ea96f37d625c207b2977ad5d09cef43e3a6fded0e4c214\" for name:\"kube-scheduler\" attempt:1 returns container id \"3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749\"" Apr 20 19:55:33.806550 containerd[1634]: time="2026-04-20T19:55:33.805784094Z" level=info msg="StartContainer for \"3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749\"" Apr 20 19:55:33.997190 containerd[1634]: time="2026-04-20T19:55:33.987289020Z" level=info msg="connecting to shim 3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749" address="unix:///run/containerd/s/ed7d8c520f12ac1fb47a8aa71220272282162638ec4a368bb2c465689728ccc8" protocol=ttrpc version=3 Apr 20 19:55:34.669730 systemd[1]: Started cri-containerd-3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749.scope - libcontainer container 
3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749. Apr 20 19:55:35.010442 kubelet[3155]: E0420 19:55:34.966998 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:55:36.043157 kubelet[3155]: E0420 19:55:36.026685 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.166s" Apr 20 19:55:36.201798 kubelet[3155]: E0420 19:55:36.199028 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:55:36.300459 kubelet[3155]: E0420 19:55:36.299641 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:55:37.478274 containerd[1634]: time="2026-04-20T19:55:37.471632923Z" level=info msg="StartContainer for \"3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749\" returns successfully" Apr 20 19:55:38.361398 kubelet[3155]: E0420 19:55:38.358550 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:55:39.925460 kubelet[3155]: E0420 19:55:39.922144 3155 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.061s" Apr 20 19:55:39.956478 kubelet[3155]: E0420 19:55:39.926558 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:55:40.062158 kubelet[3155]: E0420 19:55:40.061810 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:55:40.735741 systemd[1]: Reload requested from client PID 3610 ('systemctl') (unit session-6.scope)... Apr 20 19:55:40.735830 systemd[1]: Reloading... Apr 20 19:55:45.199473 systemd-ssh-generator[3665]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 20 19:55:45.270513 zram_generator::config[3669]: No configuration found. Apr 20 19:55:45.274541 (sd-exec-strv)[3643]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 20 19:55:46.979641 kubelet[3155]: E0420 19:55:46.808983 3155 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:55:47.555360 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 20 19:55:49.251035 kubelet[3155]: E0420 19:55:49.250890 3155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:55:50.060840 systemd[1]: Reloading finished in 9321 ms. Apr 20 19:55:51.343827 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:55:51.511298 systemd[1]: kubelet.service: Deactivated successfully. Apr 20 19:55:51.514797 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:55:51.517811 systemd[1]: kubelet.service: Consumed 7min 31.521s CPU time, 141.6M memory peak. Apr 20 19:55:51.686011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:55:56.488310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 19:55:56.785700 (kubelet)[3713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 20 19:56:05.586950 kubelet[3713]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 20 19:56:07.623719 kubelet[3713]: I0420 19:56:07.623017 3713 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 20 19:56:07.623719 kubelet[3713]: I0420 19:56:07.623654 3713 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 20 19:56:07.623719 kubelet[3713]: I0420 19:56:07.623769 3713 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 20 19:56:07.623719 kubelet[3713]: I0420 19:56:07.623774 3713 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 20 19:56:07.632997 kubelet[3713]: I0420 19:56:07.628927 3713 server.go:951] "Client rotation is on, will bootstrap in background" Apr 20 19:56:07.802531 kubelet[3713]: I0420 19:56:07.802388 3713 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 20 19:56:08.365282 kubelet[3713]: I0420 19:56:08.358372 3713 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 20 19:56:11.024717 kubelet[3713]: I0420 19:56:10.995007 3713 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 20 19:56:12.440743 kubelet[3713]: I0420 19:56:12.419782 3713 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 20 19:56:12.524179 kubelet[3713]: I0420 19:56:12.495697 3713 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 20 19:56:12.540117 kubelet[3713]: I0420 19:56:12.526615 3713 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 20 19:56:12.552252 kubelet[3713]: I0420 19:56:12.540452 3713 topology_manager.go:143] "Creating topology manager with none policy" Apr 20 19:56:12.552252 
kubelet[3713]: I0420 19:56:12.544741 3713 container_manager_linux.go:308] "Creating device plugin manager" Apr 20 19:56:12.575619 kubelet[3713]: I0420 19:56:12.553786 3713 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 20 19:56:12.844149 kubelet[3713]: I0420 19:56:12.842914 3713 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 20 19:56:12.877182 kubelet[3713]: I0420 19:56:12.875968 3713 kubelet.go:482] "Attempting to sync node with API server" Apr 20 19:56:12.889152 kubelet[3713]: I0420 19:56:12.888582 3713 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 20 19:56:12.917299 kubelet[3713]: I0420 19:56:12.916618 3713 kubelet.go:394] "Adding apiserver pod source" Apr 20 19:56:12.958317 kubelet[3713]: I0420 19:56:12.943675 3713 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 20 19:56:13.748477 kubelet[3713]: I0420 19:56:13.746582 3713 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.2.1" apiVersion="v1" Apr 20 19:56:13.927886 kubelet[3713]: I0420 19:56:13.922199 3713 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 20 19:56:13.944125 kubelet[3713]: I0420 19:56:13.943241 3713 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 20 19:56:14.485877 kubelet[3713]: I0420 19:56:14.485615 3713 server.go:1257] "Started kubelet" Apr 20 19:56:14.609663 kubelet[3713]: I0420 19:56:14.605447 3713 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 20 19:56:14.631090 kubelet[3713]: I0420 19:56:14.625925 3713 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 20 19:56:14.632673 kubelet[3713]: I0420 19:56:14.632548 3713 
server_v1.go:49] "podresources" method="list" useActivePods=true Apr 20 19:56:14.807985 kubelet[3713]: I0420 19:56:14.806748 3713 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 20 19:56:15.029614 kubelet[3713]: I0420 19:56:15.029576 3713 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 20 19:56:15.042193 kubelet[3713]: I0420 19:56:15.041572 3713 server.go:317] "Adding debug handlers to kubelet server" Apr 20 19:56:15.066761 kubelet[3713]: I0420 19:56:15.062230 3713 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 20 19:56:15.229169 kubelet[3713]: I0420 19:56:15.227490 3713 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 20 19:56:15.229169 kubelet[3713]: I0420 19:56:15.227676 3713 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 20 19:56:15.391235 kubelet[3713]: I0420 19:56:15.385948 3713 apiserver.go:52] "Watching apiserver" Apr 20 19:56:15.619208 kubelet[3713]: I0420 19:56:15.612993 3713 reconciler.go:29] "Reconciler: start to sync state" Apr 20 19:56:16.155190 kubelet[3713]: I0420 19:56:16.154969 3713 factory.go:223] Registration of the systemd container factory successfully Apr 20 19:56:16.162861 kubelet[3713]: I0420 19:56:16.158240 3713 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 20 19:56:16.622020 kubelet[3713]: I0420 19:56:16.621897 3713 factory.go:223] Registration of the containerd container factory successfully Apr 20 19:56:17.516282 kubelet[3713]: I0420 19:56:17.515604 3713 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 20 19:56:17.770168 kubelet[3713]: I0420 19:56:17.766708 3713 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6"
Apr 20 19:56:17.770168 kubelet[3713]: I0420 19:56:17.767743 3713 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 20 19:56:17.818612 kubelet[3713]: I0420 19:56:17.818547 3713 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 20 19:56:17.982120 kubelet[3713]: E0420 19:56:17.975984 3713 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 20 19:56:18.111248 kubelet[3713]: E0420 19:56:18.107339 3713 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 20 19:56:18.365607 kubelet[3713]: E0420 19:56:18.358488 3713 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 20 19:56:18.777867 kubelet[3713]: E0420 19:56:18.769696 3713 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 20 19:56:19.585333 kubelet[3713]: E0420 19:56:19.580948 3713 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 20 19:56:21.248383 kubelet[3713]: E0420 19:56:21.201516 3713 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 20 19:56:24.589489 kubelet[3713]: E0420 19:56:24.464667 3713 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 20 19:56:26.622689 kubelet[3713]: I0420 19:56:26.622592 3713 cpu_manager.go:225] "Starting" policy="none"
Apr 20 19:56:26.622689 kubelet[3713]: I0420 19:56:26.622639 3713 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 20 19:56:26.633667 kubelet[3713]: I0420 19:56:26.622895 3713 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 20 19:56:26.633667 kubelet[3713]: I0420 19:56:26.624028 3713 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Apr 20 19:56:26.633667 kubelet[3713]: I0420 19:56:26.624092 3713 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Apr 20 19:56:26.633667 kubelet[3713]: I0420 19:56:26.624187 3713 policy_none.go:50] "Start"
Apr 20 19:56:26.633667 kubelet[3713]: I0420 19:56:26.624214 3713 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 20 19:56:26.633667 kubelet[3713]: I0420 19:56:26.624243 3713 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 20 19:56:26.633667 kubelet[3713]: I0420 19:56:26.624591 3713 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 20 19:56:26.633667 kubelet[3713]: I0420 19:56:26.624598 3713 policy_none.go:44] "Start"
Apr 20 19:56:29.830486 kubelet[3713]: E0420 19:56:29.829902 3713 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 20 19:56:30.528671 kubelet[3713]: E0420 19:56:30.525392 3713 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 20 19:56:30.861322 kubelet[3713]: I0420 19:56:30.854486 3713 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 20 19:56:31.410102 kubelet[3713]: I0420 19:56:30.873510 3713 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 20 19:56:31.810343 kubelet[3713]: I0420 19:56:31.737806 3713 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 20 19:56:32.048575 kubelet[3713]: E0420 19:56:32.046635 3713 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 20 19:56:35.199370 kubelet[3713]: I0420 19:56:35.196931 3713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0882dc0d06df2a57de7e97bbe10d0631-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0882dc0d06df2a57de7e97bbe10d0631\") " pod="kube-system/kube-apiserver-localhost"
Apr 20 19:56:35.547618 kubelet[3713]: I0420 19:56:35.546556 3713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0882dc0d06df2a57de7e97bbe10d0631-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0882dc0d06df2a57de7e97bbe10d0631\") " pod="kube-system/kube-apiserver-localhost"
Apr 20 19:56:35.741878 kubelet[3713]: I0420 19:56:35.733732 3713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0882dc0d06df2a57de7e97bbe10d0631-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0882dc0d06df2a57de7e97bbe10d0631\") " pod="kube-system/kube-apiserver-localhost"
Apr 20 19:56:35.860218 kubelet[3713]: I0420 19:56:35.757270 3713 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 20 19:56:36.586300 kubelet[3713]: I0420 19:56:36.464543 3713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/59dc6bef4fa0beb64c871485aab08cdf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"59dc6bef4fa0beb64c871485aab08cdf\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 19:56:36.628788 kubelet[3713]: I0420 19:56:36.628526 3713 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 19:56:36.826153 kubelet[3713]: I0420 19:56:36.824650 3713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/59dc6bef4fa0beb64c871485aab08cdf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"59dc6bef4fa0beb64c871485aab08cdf\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 19:56:37.081713 kubelet[3713]: I0420 19:56:37.079784 3713 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 20 19:56:37.171140 kubelet[3713]: I0420 19:56:37.094880 3713 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 20 19:56:37.355492 kubelet[3713]: I0420 19:56:37.343829 3713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/59dc6bef4fa0beb64c871485aab08cdf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"59dc6bef4fa0beb64c871485aab08cdf\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 19:56:37.582259 kubelet[3713]: I0420 19:56:37.511759 3713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/59dc6bef4fa0beb64c871485aab08cdf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"59dc6bef4fa0beb64c871485aab08cdf\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 19:56:37.670798 kubelet[3713]: I0420 19:56:37.665669 3713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/59dc6bef4fa0beb64c871485aab08cdf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"59dc6bef4fa0beb64c871485aab08cdf\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 19:56:39.127571 kubelet[3713]: I0420 19:56:39.123814 3713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8c463bc49d886414af4d8b2e5922b9f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f8c463bc49d886414af4d8b2e5922b9f\") " pod="kube-system/kube-scheduler-localhost"
Apr 20 19:56:41.873257 kubelet[3713]: E0420 19:56:41.851682 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:56:45.494369 kubelet[3713]: E0420 19:56:45.391177 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.45s"
Apr 20 19:56:46.049198 kubelet[3713]: I0420 19:56:45.961826 3713 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Apr 20 19:56:46.142943 kubelet[3713]: E0420 19:56:46.141678 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:56:46.188180 kubelet[3713]: I0420 19:56:46.185945 3713 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Apr 20 19:56:46.524396 kubelet[3713]: E0420 19:56:46.496889 3713 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 20 19:56:46.904169 kubelet[3713]: E0420 19:56:46.901373 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:56:46.915116 kubelet[3713]: E0420 19:56:46.913428 3713 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 20 19:56:46.919875 kubelet[3713]: E0420 19:56:46.919732 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:56:48.344295 kubelet[3713]: E0420 19:56:48.330828 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.77s"
Apr 20 19:56:52.024316 kubelet[3713]: E0420 19:56:52.018832 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:56:52.618379 kubelet[3713]: E0420 19:56:52.618002 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:56:54.189525 kubelet[3713]: E0420 19:56:54.189381 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.741s"
Apr 20 19:56:54.475467 kubelet[3713]: E0420 19:56:54.465842 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:56:56.318891 kubelet[3713]: E0420 19:56:56.318450 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.122s"
Apr 20 19:56:57.949785 kubelet[3713]: E0420 19:56:57.949727 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:56:58.990233 kubelet[3713]: E0420 19:56:58.982761 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.573s"
Apr 20 19:57:05.472273 kubelet[3713]: E0420 19:57:05.466879 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.317s"
Apr 20 19:57:07.251348 kubelet[3713]: E0420 19:57:07.250344 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.353s"
Apr 20 19:57:17.950243 kubelet[3713]: E0420 19:57:17.949482 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.018s"
Apr 20 19:57:20.898529 sudo[1813]: pam_unix(sudo:session): session closed for user root
Apr 20 19:57:20.915106 sshd[1812]: Connection closed by 10.0.0.1 port 38042
Apr 20 19:57:20.924592 sshd-session[1808]: pam_unix(sshd:session): session closed for user core
Apr 20 19:57:21.093100 systemd[1]: sshd@4-12289-10.0.0.20:22-10.0.0.1:38042.service: Deactivated successfully.
Apr 20 19:57:21.255932 systemd[1]: session-6.scope: Deactivated successfully.
Apr 20 19:57:21.269692 systemd[1]: session-6.scope: Consumed 3min 47.176s CPU time, 226.9M memory peak.
Apr 20 19:57:21.362574 systemd-logind[1602]: Session 6 logged out. Waiting for processes to exit.
Apr 20 19:57:21.523766 systemd-logind[1602]: Removed session 6.
Apr 20 19:57:22.792359 kubelet[3713]: E0420 19:57:22.785807 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.573s"
Apr 20 19:57:24.350540 systemd[1]: cri-containerd-3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749.scope: Deactivated successfully.
Apr 20 19:57:24.390152 systemd[1]: cri-containerd-3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749.scope: Consumed 12.021s CPU time, 18.8M memory peak, 136K read from disk.
Apr 20 19:57:24.798893 containerd[1634]: time="2026-04-20T19:57:24.731981540Z" level=info msg="received container exit event container_id:\"3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749\" id:\"3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749\" pid:3588 exit_status:1 exited_at:{seconds:1776715044 nanos:547350177}"
Apr 20 19:57:34.828740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749-rootfs.mount: Deactivated successfully.
Apr 20 19:57:35.455337 containerd[1634]: time="2026-04-20T19:57:35.439170769Z" level=error msg="failed to delete shim" error="close wait error: context deadline exceeded" id=3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749
Apr 20 19:57:37.351692 kubelet[3713]: E0420 19:57:37.346941 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="14.128s"
Apr 20 19:57:43.148329 kubelet[3713]: E0420 19:57:43.133893 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:57:54.471198 kubelet[3713]: E0420 19:57:54.467017 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="16.32s"
Apr 20 19:58:07.658413 kubelet[3713]: E0420 19:58:07.655683 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.109s"
Apr 20 19:58:10.053816 kubelet[3713]: E0420 19:58:10.049677 3713 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 20 19:58:10.771870 systemd[1]: cri-containerd-de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d.scope: Deactivated successfully.
Apr 20 19:58:10.789494 systemd[1]: cri-containerd-de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d.scope: Consumed 52.715s CPU time, 30.3M memory peak, 64K read from disk.
Apr 20 19:58:12.092802 containerd[1634]: time="2026-04-20T19:58:11.928946068Z" level=info msg="received container exit event container_id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" pid:3541 exit_status:1 exited_at:{seconds:1776715091 nanos:272469112}"
Apr 20 19:58:14.602553 kubelet[3713]: E0420 19:58:14.600226 3713 controller.go:251] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 20 19:58:22.323959 containerd[1634]: time="2026-04-20T19:58:22.322593337Z" level=error msg="failed to handle container TaskExit event container_id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" pid:3541 exit_status:1 exited_at:{seconds:1776715091 nanos:272469112}" error="failed to stop container: context deadline exceeded"
Apr 20 19:58:22.462753 containerd[1634]: time="2026-04-20T19:58:22.449985526Z" level=error msg="ttrpc: received message on inactive stream" stream=35
Apr 20 19:58:22.517506 containerd[1634]: time="2026-04-20T19:58:22.471696773Z" level=error msg="ttrpc: received message on inactive stream" stream=39
Apr 20 19:58:23.317484 kubelet[3713]: E0420 19:58:23.305746 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.638s"
Apr 20 19:58:24.427953 containerd[1634]: time="2026-04-20T19:58:24.416687927Z" level=info msg="TaskExit event container_id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" pid:3541 exit_status:1 exited_at:{seconds:1776715091 nanos:272469112}"
Apr 20 19:58:24.725251 kubelet[3713]: I0420 19:58:24.710861 3713 scope.go:122] "RemoveContainer" containerID="27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438"
Apr 20 19:58:25.742508 kubelet[3713]: E0420 19:58:25.667984 3713 kubelet_node_status.go:386] "Node not becoming ready in time after startup"
Apr 20 19:58:25.808212 kubelet[3713]: I0420 19:58:25.751679 3713 scope.go:122] "RemoveContainer" containerID="27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438"
Apr 20 19:58:26.340601 kubelet[3713]: E0420 19:58:26.333924 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:58:27.143821 kubelet[3713]: I0420 19:58:27.139918 3713 scope.go:122] "RemoveContainer" containerID="3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749"
Apr 20 19:58:27.797287 kubelet[3713]: E0420 19:58:27.781730 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:58:27.864500 containerd[1634]: time="2026-04-20T19:58:27.790725793Z" level=info msg="container event discarded" container=ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299 type=CONTAINER_STOPPED_EVENT
Apr 20 19:58:28.281204 kubelet[3713]: E0420 19:58:28.249253 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:58:30.090822 kubelet[3713]: E0420 19:58:29.994697 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 19:58:30.910472 containerd[1634]: time="2026-04-20T19:58:30.856420472Z" level=info msg="RemoveContainer for \"27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438\""
Apr 20 19:58:32.619158 containerd[1634]: time="2026-04-20T19:58:32.602607567Z" level=info msg="RemoveContainer for \"27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438\""
Apr 20 19:58:32.782361 containerd[1634]: time="2026-04-20T19:58:32.715668070Z" level=error msg="RemoveContainer for \"27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438\" failed" error="rpc error: code = Unknown desc = failed to set removing state for container \"27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438\": container is already in removing state"
Apr 20 19:58:33.224683 kubelet[3713]: E0420 19:58:33.223853 3713 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438\": container is already in removing state" containerID="27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438"
Apr 20 19:58:33.239331 kubelet[3713]: E0420 19:58:33.231951 3713 kuberuntime_gc.go:151] "Failed to remove container" err="rpc error: code = Unknown desc = failed to set removing state for container \"27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438\": container is already in removing state" containerID="27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438"
Apr 20 19:58:34.472900 containerd[1634]: time="2026-04-20T19:58:34.454757725Z" level=error msg="failed to delete task" error="context deadline exceeded" id=de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d
Apr 20 19:58:35.089638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d-rootfs.mount: Deactivated successfully.
Apr 20 19:58:35.299741 containerd[1634]: time="2026-04-20T19:58:35.299101760Z" level=error msg="Failed to handle backOff event container_id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" pid:3541 exit_status:1 exited_at:{seconds:1776715091 nanos:272469112} for de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 19:58:35.433024 containerd[1634]: time="2026-04-20T19:58:35.363001096Z" level=error msg="ttrpc: received message on inactive stream" stream=53
Apr 20 19:58:35.823540 kubelet[3713]: E0420 19:58:35.823162 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.835s"
Apr 20 19:58:35.909418 kubelet[3713]: E0420 19:58:35.823510 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 19:58:36.635163 containerd[1634]: time="2026-04-20T19:58:36.624330077Z" level=info msg="CreateContainer within sandbox \"0fa31d4d5e50ddeaf2ea96f37d625c207b2977ad5d09cef43e3a6fded0e4c214\" for container name:\"kube-scheduler\" attempt:2"
Apr 20 19:58:36.952704 containerd[1634]: time="2026-04-20T19:58:36.943292899Z" level=info msg="RemoveContainer for \"27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438\" returns successfully"
Apr 20 19:58:38.273410 containerd[1634]: time="2026-04-20T19:58:38.255462449Z" level=info msg="TaskExit event container_id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" pid:3541 exit_status:1 exited_at:{seconds:1776715091 nanos:272469112}"
Apr 20 19:58:40.341898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4196470622.mount: Deactivated successfully.
Apr 20 19:58:41.642460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1881455923.mount: Deactivated successfully.
Apr 20 19:58:41.764491 containerd[1634]: time="2026-04-20T19:58:41.760855547Z" level=info msg="Container 66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53: CDI devices from CRI Config.CDIDevices: []"
Apr 20 19:58:44.251326 kubelet[3713]: E0420 19:58:44.157815 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 19:58:48.353296 containerd[1634]: time="2026-04-20T19:58:48.246879315Z" level=error msg="failed to delete task" error="context deadline exceeded" id=de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d
Apr 20 19:58:48.380710 containerd[1634]: time="2026-04-20T19:58:48.364275050Z" level=error msg="ttrpc: received message on inactive stream" stream=67
Apr 20 19:58:48.547920 containerd[1634]: time="2026-04-20T19:58:48.537756508Z" level=error msg="Failed to handle backOff event container_id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" pid:3541 exit_status:1 exited_at:{seconds:1776715091 nanos:272469112} for de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 19:58:49.438507 containerd[1634]: time="2026-04-20T19:58:49.432916788Z" level=info msg="CreateContainer within sandbox \"0fa31d4d5e50ddeaf2ea96f37d625c207b2977ad5d09cef43e3a6fded0e4c214\" for name:\"kube-scheduler\" attempt:2 returns container id \"66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53\""
Apr 20 19:58:49.530570 kubelet[3713]: E0420 19:58:49.499684 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.282s"
Apr 20 19:58:50.188481 kubelet[3713]: E0420 19:58:50.187392 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 19:58:50.524107 containerd[1634]: time="2026-04-20T19:58:50.477014172Z" level=info msg="StartContainer for \"66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53\""
Apr 20 19:58:52.554456 containerd[1634]: time="2026-04-20T19:58:52.487801450Z" level=info msg="connecting to shim 66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53" address="unix:///run/containerd/s/ed7d8c520f12ac1fb47a8aa71220272282162638ec4a368bb2c465689728ccc8" protocol=ttrpc version=3
Apr 20 19:58:53.667504 containerd[1634]: time="2026-04-20T19:58:53.652550830Z" level=info msg="TaskExit event container_id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" pid:3541 exit_status:1 exited_at:{seconds:1776715091 nanos:272469112}"
Apr 20 19:58:57.727224 systemd[1]: Started cri-containerd-66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53.scope - libcontainer container 66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53.
Apr 20 19:58:58.921798 kubelet[3713]: E0420 19:58:58.672467 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 19:59:03.869333 containerd[1634]: time="2026-04-20T19:59:03.552470428Z" level=error msg="ttrpc: received message on inactive stream" stream=75
Apr 20 19:59:04.049169 containerd[1634]: time="2026-04-20T19:59:04.023005684Z" level=error msg="ttrpc: received message on inactive stream" stream=79
Apr 20 19:59:04.051697 containerd[1634]: time="2026-04-20T19:59:04.026463834Z" level=error msg="Failed to handle backOff event container_id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" pid:3541 exit_status:1 exited_at:{seconds:1776715091 nanos:272469112} for de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 19:59:06.434843 kubelet[3713]: E0420 19:59:06.434553 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 19:59:11.511530 kubelet[3713]: E0420 19:59:11.286425 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="21.71s"
Apr 20 19:59:12.349253 containerd[1634]: time="2026-04-20T19:59:12.347719254Z" level=info msg="TaskExit event container_id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" pid:3541 exit_status:1 exited_at:{seconds:1776715091 nanos:272469112}"
Apr 20 19:59:14.850538 kubelet[3713]: E0420 19:59:14.778876 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 19:59:22.879373 containerd[1634]: time="2026-04-20T19:59:22.867521751Z" level=error msg="Failed to handle backOff event container_id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" pid:3541 exit_status:1 exited_at:{seconds:1776715091 nanos:272469112} for de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 19:59:22.947578 containerd[1634]: time="2026-04-20T19:59:22.945354545Z" level=error msg="ttrpc: received message on inactive stream" stream=87
Apr 20 19:59:23.451100 containerd[1634]: time="2026-04-20T19:59:23.338864228Z" level=error msg="ttrpc: received message on inactive stream" stream=89
Apr 20 19:59:24.868865 kubelet[3713]: E0420 19:59:24.862483 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 19:59:26.786172 kubelet[3713]: E0420 19:59:26.785011 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="14.941s"
Apr 20 19:59:29.877411 kubelet[3713]: E0420 19:59:29.869561 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.06s"
Apr 20 19:59:32.083594 kubelet[3713]: E0420 19:59:32.080468 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 19:59:32.130141 containerd[1634]: time="2026-04-20T19:59:32.126753109Z" level=info msg="StopContainer for \"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" with timeout 30 (s)"
Apr 20 19:59:32.927359 containerd[1634]: time="2026-04-20T19:59:32.919545961Z" level=info msg="Stop container \"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" with signal terminated"
Apr 20 19:59:37.231244 containerd[1634]: time="2026-04-20T19:59:37.070673531Z" level=info msg="StartContainer for \"66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53\" returns successfully"
Apr 20 19:59:39.283126 containerd[1634]: time="2026-04-20T19:59:39.191901034Z" level=info msg="TaskExit event container_id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" pid:3541 exit_status:1 exited_at:{seconds:1776715091 nanos:272469112}"
Apr 20 19:59:42.789315 containerd[1634]: time="2026-04-20T19:59:42.787758738Z" level=error msg="get state for de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d" error="context deadline exceeded"
Apr 20 19:59:42.836292 containerd[1634]: time="2026-04-20T19:59:42.831596185Z" level=warning msg="unknown status" status=0
Apr 20 19:59:46.280122 kubelet[3713]: E0420 19:59:44.542890 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 19:59:46.440543 containerd[1634]: time="2026-04-20T19:59:46.252966285Z" level=error msg="get state for de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d" error="context deadline exceeded"
Apr 20 19:59:46.540395 containerd[1634]: time="2026-04-20T19:59:46.499637148Z" level=warning msg="unknown status" status=0
Apr 20 19:59:47.833296 containerd[1634]: time="2026-04-20T19:59:47.771968254Z" level=error msg="ttrpc: received message on inactive stream" stream=95
Apr 20 19:59:48.131988 containerd[1634]: time="2026-04-20T19:59:48.090456265Z" level=error msg="ttrpc: received message on inactive stream" stream=97
Apr 20 19:59:49.464964 containerd[1634]: time="2026-04-20T19:59:49.443630886Z" level=error msg="failed to delete task" error="context deadline exceeded" id=de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d
Apr 20 19:59:49.801224 containerd[1634]: time="2026-04-20T19:59:49.675197402Z" level=error msg="Failed to handle backOff event container_id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" pid:3541 exit_status:1 exited_at:{seconds:1776715091 nanos:272469112} for de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 19:59:49.849362 containerd[1634]: time="2026-04-20T19:59:49.682466506Z" level=error msg="ttrpc: received message on inactive stream" stream=101
Apr 20 20:00:00.222368 kubelet[3713]: E0420 19:59:58.921373 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:00:03.654175 kubelet[3713]: E0420 20:00:03.653673 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="33.691s"
Apr 20 20:00:08.309390 kubelet[3713]: E0420 20:00:08.010231 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:00:09.378486 kubelet[3713]: E0420 20:00:09.377800 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:00:12.174020 containerd[1634]: time="2026-04-20T20:00:12.100959295Z" level=info msg="container event discarded" container=de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d type=CONTAINER_CREATED_EVENT
Apr 20 20:00:13.564257 kubelet[3713]: E0420 20:00:13.563105 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.57s"
Apr 20 20:00:15.050338 kubelet[3713]: E0420 20:00:15.025819 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:00:15.204187 kubelet[3713]: E0420 20:00:15.079438 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:00:18.201964 containerd[1634]: time="2026-04-20T20:00:18.192750817Z" level=info msg="Kill container \"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\""
Apr 20 20:00:18.702253 kubelet[3713]: E0420 20:00:18.593849 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.979s"
Apr 20 20:00:22.353216 containerd[1634]: time="2026-04-20T20:00:22.350723341Z" level=info msg="TaskExit event container_id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" pid:3541 exit_status:1 exited_at:{seconds:1776715091 nanos:272469112}"
Apr 20 20:00:22.858190 kubelet[3713]: E0420 20:00:22.855475 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:00:23.988979 containerd[1634]: time="2026-04-20T20:00:23.963981655Z" level=info msg="container event discarded" container=de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d type=CONTAINER_STARTED_EVENT
Apr 20 20:00:24.263555 kubelet[3713]: E0420 20:00:24.259118 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:00:24.357367 kubelet[3713]: E0420 20:00:24.356665 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.399s"
Apr 20 20:00:25.692334 containerd[1634]: time="2026-04-20T20:00:25.546781162Z" level=info msg="container event discarded" container=27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438 type=CONTAINER_STOPPED_EVENT
Apr 20 20:00:28.669835 kubelet[3713]: E0420 20:00:28.656947 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.065s"
Apr 20 20:00:30.625262 kubelet[3713]: E0420 20:00:30.606839 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:00:32.658515 containerd[1634]: time="2026-04-20T20:00:32.612641749Z" level=error msg="failed to delete task" error="context deadline exceeded" id=de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d
Apr 20 20:00:32.684003 containerd[1634]: time="2026-04-20T20:00:32.574759151Z" level=error msg="failed to drain init process de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d io" error="context deadline exceeded" runtime=io.containerd.runc.v2
Apr 20 20:00:32.888467 containerd[1634]: time="2026-04-20T20:00:32.791809607Z" level=error msg="ttrpc: received message on inactive stream" stream=119
Apr 20 20:00:32.941971 containerd[1634]: time="2026-04-20T20:00:32.888002279Z" level=error msg="Failed to handle backOff event container_id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" pid:3541 exit_status:1 exited_at:{seconds:1776715091 nanos:272469112} for de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 20:00:33.660835 containerd[1634]: time="2026-04-20T20:00:33.582781693Z" level=info msg="container event discarded" container=3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749 type=CONTAINER_CREATED_EVENT
Apr 20 20:00:35.695290 kubelet[3713]: E0420 20:00:35.594554 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:00:37.458680 containerd[1634]: time="2026-04-20T20:00:37.307974787Z" level=info msg="container event discarded" container=3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749 type=CONTAINER_STARTED_EVENT
Apr 20 20:00:38.369080 kubelet[3713]: E0420 20:00:38.362874 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:00:41.086335 kubelet[3713]: E0420 20:00:40.929015 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.561s"
Apr 20 20:00:43.552923 kubelet[3713]: E0420 20:00:43.488451 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:00:43.663913 kubelet[3713]: E0420 20:00:43.556962 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.448s"
Apr 20 20:00:44.466374 kubelet[3713]: E0420 20:00:44.454847 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20
20:00:47.161947 kubelet[3713]: E0420 20:00:47.160738 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.471s" Apr 20 20:00:47.858763 kubelet[3713]: E0420 20:00:47.856561 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:00:49.744622 kubelet[3713]: E0420 20:00:49.742567 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.459s" Apr 20 20:00:50.264966 kubelet[3713]: E0420 20:00:50.161029 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:00:50.841220 kubelet[3713]: E0420 20:00:50.838856 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.08s" Apr 20 20:00:53.214195 kubelet[3713]: E0420 20:00:53.212583 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.381s" Apr 20 20:00:55.491155 kubelet[3713]: E0420 20:00:55.483701 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.635s" Apr 20 20:00:56.383209 kubelet[3713]: E0420 20:00:56.379258 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:00:59.235628 kubelet[3713]: E0420 20:00:59.144947 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.228s" Apr 20 20:01:02.465993 kubelet[3713]: E0420 20:01:02.457692 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:01:02.878094 kubelet[3713]: E0420 20:01:02.877400 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.528s" Apr 20 20:01:05.924753 kubelet[3713]: E0420 20:01:05.919841 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.989s" Apr 20 20:01:08.062151 kubelet[3713]: E0420 20:01:07.991954 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.011s" Apr 20 20:01:08.772931 kubelet[3713]: E0420 20:01:08.752976 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:01:11.308654 kubelet[3713]: E0420 20:01:11.308016 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.185s" Apr 20 20:01:13.503291 kubelet[3713]: E0420 20:01:13.501869 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.167s" Apr 20 20:01:14.351503 kubelet[3713]: E0420 20:01:14.260913 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:01:14.544426 kubelet[3713]: E0420 20:01:14.538676 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.014s" Apr 20 20:01:16.399840 kubelet[3713]: E0420 20:01:16.399299 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.845s" Apr 20 20:01:18.265748 kubelet[3713]: E0420 20:01:18.259881 3713 kubelet.go:2691] "Housekeeping took 
longer than expected" err="housekeeping took too long" expected="1s" actual="1.699s" Apr 20 20:01:22.980406 kubelet[3713]: E0420 20:01:22.855494 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:01:24.147145 kubelet[3713]: E0420 20:01:24.142635 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.816s" Apr 20 20:01:25.582141 kubelet[3713]: E0420 20:01:25.581365 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.436s" Apr 20 20:01:28.131975 kubelet[3713]: E0420 20:01:28.047201 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:01:29.651472 kubelet[3713]: E0420 20:01:29.567705 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:01:32.108829 kubelet[3713]: E0420 20:01:32.107619 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.25s" Apr 20 20:01:35.230813 kubelet[3713]: E0420 20:01:35.230124 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.967s" Apr 20 20:01:36.420314 kubelet[3713]: E0420 20:01:36.411583 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:01:37.428403 containerd[1634]: time="2026-04-20T20:01:37.360819703Z" level=info msg="TaskExit event 
container_id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" pid:3541 exit_status:1 exited_at:{seconds:1776715091 nanos:272469112}" Apr 20 20:01:42.711778 kubelet[3713]: E0420 20:01:42.684806 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:01:42.738603 kubelet[3713]: E0420 20:01:42.736531 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.338s" Apr 20 20:01:44.032168 kubelet[3713]: E0420 20:01:44.020778 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.25s" Apr 20 20:01:47.059478 kubelet[3713]: E0420 20:01:47.056007 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.023s" Apr 20 20:01:47.347732 containerd[1634]: time="2026-04-20T20:01:47.327985880Z" level=error msg="failed to delete task" error="context deadline exceeded" id=de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d Apr 20 20:01:47.488004 containerd[1634]: time="2026-04-20T20:01:47.467875554Z" level=error msg="ttrpc: received message on inactive stream" stream=141 Apr 20 20:01:47.573611 containerd[1634]: time="2026-04-20T20:01:47.536490456Z" level=error msg="Failed to handle backOff event container_id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" pid:3541 exit_status:1 exited_at:{seconds:1776715091 nanos:272469112} for de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 20:01:50.431774 kubelet[3713]: E0420 
20:01:50.141966 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:01:53.361786 kubelet[3713]: E0420 20:01:53.357536 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.039s" Apr 20 20:01:56.738193 kubelet[3713]: E0420 20:01:56.734447 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:01:57.940089 kubelet[3713]: E0420 20:01:57.933424 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.981s" Apr 20 20:02:00.973264 kubelet[3713]: E0420 20:02:00.972540 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.134s" Apr 20 20:02:01.987005 kubelet[3713]: E0420 20:02:01.985942 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:02:02.118009 kubelet[3713]: E0420 20:02:02.117474 3713 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d" Apr 20 20:02:02.140336 kubelet[3713]: E0420 20:02:02.126220 3713 kuberuntime_container.go:895] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-controller-manager-localhost" podUID="59dc6bef4fa0beb64c871485aab08cdf" containerName="kube-controller-manager" containerID="containerd://de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d" gracePeriod=30 Apr 20 
20:02:02.149652 kubelet[3713]: E0420 20:02:02.140945 3713 kuberuntime_manager.go:1437] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-controller-manager" containerID={"Type":"containerd","ID":"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d"} pod="kube-system/kube-controller-manager-localhost" Apr 20 20:02:02.149652 kubelet[3713]: E0420 20:02:02.147429 3713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-controller-manager\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-controller-manager-localhost" podUID="59dc6bef4fa0beb64c871485aab08cdf" Apr 20 20:02:02.303217 containerd[1634]: time="2026-04-20T20:02:02.187971146Z" level=error msg="StopContainer for \"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container \"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" to be killed: wait container \"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\": context canceled" Apr 20 20:02:02.900608 kubelet[3713]: E0420 20:02:02.895310 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.06s" Apr 20 20:02:04.001137 containerd[1634]: time="2026-04-20T20:02:03.998857396Z" level=info msg="StopContainer for \"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" with timeout 30 (s)" Apr 20 20:02:04.092376 containerd[1634]: time="2026-04-20T20:02:04.091421709Z" level=info msg="Skipping the sending of signal terminated to container \"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" because a prior stop with timeout>0 request already sent the signal" Apr 20 20:02:05.919142 kubelet[3713]: E0420 20:02:05.918389 3713 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:07.056337 kubelet[3713]: E0420 20:02:07.040591 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:02:12.199469 kubelet[3713]: E0420 20:02:12.191996 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:02:15.190928 kubelet[3713]: E0420 20:02:15.190854 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.289s" Apr 20 20:02:18.390972 kubelet[3713]: E0420 20:02:18.292742 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:02:20.949882 kubelet[3713]: E0420 20:02:20.949516 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.117s" Apr 20 20:02:22.710871 kubelet[3713]: E0420 20:02:22.709696 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.647s" Apr 20 20:02:23.446851 kubelet[3713]: E0420 20:02:23.446245 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:02:26.940654 kubelet[3713]: E0420 20:02:26.939896 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.055s" Apr 20 20:02:28.618157 kubelet[3713]: E0420 20:02:28.610817 3713 kubelet.go:3130] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:02:31.695486 kubelet[3713]: E0420 20:02:31.688016 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.851s" Apr 20 20:02:33.245233 kubelet[3713]: E0420 20:02:33.244839 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.396s" Apr 20 20:02:34.047264 kubelet[3713]: E0420 20:02:34.043885 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:02:34.222839 containerd[1634]: time="2026-04-20T20:02:34.216206086Z" level=info msg="Kill container \"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\"" Apr 20 20:02:35.317239 kubelet[3713]: E0420 20:02:35.317008 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.495s" Apr 20 20:02:36.211336 containerd[1634]: time="2026-04-20T20:02:36.192145889Z" level=info msg="container event discarded" container=3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749 type=CONTAINER_STOPPED_EVENT Apr 20 20:02:41.064219 kubelet[3713]: E0420 20:02:40.981408 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:02:43.464280 kubelet[3713]: E0420 20:02:43.459347 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.541s" Apr 20 20:02:45.249256 kubelet[3713]: E0420 20:02:45.241751 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.763s" Apr 20 20:02:45.750203 kubelet[3713]: 
E0420 20:02:45.686962 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:48.823142 kubelet[3713]: E0420 20:02:48.693448 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:02:53.696331 kubelet[3713]: E0420 20:02:53.695862 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.324s" Apr 20 20:02:54.323242 kubelet[3713]: E0420 20:02:54.317458 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:02:55.764000 kubelet[3713]: E0420 20:02:55.763638 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.11s" Apr 20 20:02:59.677333 kubelet[3713]: E0420 20:02:59.663655 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.82s" Apr 20 20:02:59.740555 kubelet[3713]: E0420 20:02:59.740201 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:03:00.949447 kubelet[3713]: E0420 20:03:00.941231 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.082s" Apr 20 20:03:03.627933 kubelet[3713]: E0420 20:03:03.626904 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.769s" Apr 20 20:03:06.101161 kubelet[3713]: E0420 20:03:06.039681 3713 kubelet.go:3130] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:03:08.391131 kubelet[3713]: E0420 20:03:08.387480 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.521s" Apr 20 20:03:11.147950 kubelet[3713]: E0420 20:03:11.122708 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.638s" Apr 20 20:03:11.898209 kubelet[3713]: E0420 20:03:11.892892 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:03:13.326237 kubelet[3713]: E0420 20:03:13.325756 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.171s" Apr 20 20:03:14.629117 kubelet[3713]: E0420 20:03:14.627813 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.059s" Apr 20 20:03:15.997323 kubelet[3713]: E0420 20:03:15.996397 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.367s" Apr 20 20:03:17.333856 kubelet[3713]: E0420 20:03:17.328958 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:03:19.155636 kubelet[3713]: E0420 20:03:19.151309 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.087s" Apr 20 20:03:24.618267 kubelet[3713]: E0420 20:03:24.543589 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 
20:03:28.724455 kubelet[3713]: E0420 20:03:28.696862 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.434s" Apr 20 20:03:31.152800 kubelet[3713]: E0420 20:03:31.149915 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:03:32.660718 kubelet[3713]: E0420 20:03:32.660125 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.868s" Apr 20 20:03:34.056003 kubelet[3713]: E0420 20:03:34.055784 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.374s" Apr 20 20:03:35.529751 kubelet[3713]: E0420 20:03:35.527570 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.454s" Apr 20 20:03:35.742999 kubelet[3713]: E0420 20:03:35.739124 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:03:36.492863 kubelet[3713]: E0420 20:03:36.486790 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:03:36.961152 containerd[1634]: time="2026-04-20T20:03:36.942876938Z" level=info msg="container event discarded" container=27da244ed86a7616584bef355bccbc800d552e89767d09369d698f192a884438 type=CONTAINER_DELETED_EVENT Apr 20 20:03:40.158374 kubelet[3713]: E0420 20:03:40.149607 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.275s" Apr 20 20:03:41.914418 kubelet[3713]: E0420 20:03:41.893986 3713 kubelet.go:3130] "Container runtime network not 
ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:03:42.929605 kubelet[3713]: E0420 20:03:42.926009 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.766s" Apr 20 20:03:44.554209 kubelet[3713]: E0420 20:03:44.549830 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.576s" Apr 20 20:03:46.159193 kubelet[3713]: E0420 20:03:46.153870 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.597s" Apr 20 20:03:47.617414 kubelet[3713]: E0420 20:03:47.610285 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:03:47.756865 kubelet[3713]: E0420 20:03:47.753872 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.592s" Apr 20 20:03:48.850396 containerd[1634]: time="2026-04-20T20:03:48.842871236Z" level=info msg="container event discarded" container=66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53 type=CONTAINER_CREATED_EVENT Apr 20 20:03:49.050220 kubelet[3713]: E0420 20:03:49.047440 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.154s" Apr 20 20:03:52.859731 kubelet[3713]: E0420 20:03:52.831710 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.957s" Apr 20 20:03:53.446282 kubelet[3713]: E0420 20:03:53.443922 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:03:55.379554 
kubelet[3713]: E0420 20:03:55.370516 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.417s" Apr 20 20:03:56.320882 containerd[1634]: time="2026-04-20T20:03:56.316270847Z" level=info msg="TaskExit event container_id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" id:\"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" pid:3541 exit_status:1 exited_at:{seconds:1776715091 nanos:272469112}" Apr 20 20:03:59.335162 kubelet[3713]: E0420 20:03:59.331025 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.886s" Apr 20 20:03:59.805887 kubelet[3713]: E0420 20:03:59.800399 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:04:05.294586 kubelet[3713]: E0420 20:04:05.293334 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:04:05.568315 kubelet[3713]: E0420 20:04:05.562363 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.203s" Apr 20 20:04:05.957801 containerd[1634]: time="2026-04-20T20:04:05.954681001Z" level=info msg="StopContainer for \"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" returns successfully" Apr 20 20:04:06.076322 kubelet[3713]: E0420 20:04:06.075347 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:04:06.103406 kubelet[3713]: E0420 20:04:06.102855 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:04:07.937141 kubelet[3713]: E0420 20:04:07.934456 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.362s" Apr 20 20:04:09.279192 containerd[1634]: time="2026-04-20T20:04:09.125850194Z" level=info msg="CreateContainer within sandbox \"023f1d178593fff81f33c2329dfd5ed7664d4a93fa42bff6207725fae955d5ce\" for container name:\"kube-controller-manager\" attempt:2" Apr 20 20:04:11.574562 kubelet[3713]: E0420 20:04:11.562932 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.504s" Apr 20 20:04:11.757513 kubelet[3713]: E0420 20:04:11.754635 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:04:11.800983 kubelet[3713]: I0420 20:04:11.800926 3713 scope.go:122] "RemoveContainer" containerID="ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299" Apr 20 20:04:13.524951 containerd[1634]: time="2026-04-20T20:04:13.523298563Z" level=info msg="RemoveContainer for \"ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299\"" Apr 20 20:04:13.752948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount492006604.mount: Deactivated successfully. Apr 20 20:04:14.137610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4026303954.mount: Deactivated successfully. 
Apr 20 20:04:14.221603 containerd[1634]: time="2026-04-20T20:04:14.219971221Z" level=info msg="Container 850e2c3b9840f46d069a347870be59efc5165ce578e5666c196b8cdae63f35a9: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:04:15.534472 kubelet[3713]: E0420 20:04:15.532878 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.688s" Apr 20 20:04:17.117354 containerd[1634]: time="2026-04-20T20:04:17.113657627Z" level=info msg="RemoveContainer for \"ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299\" returns successfully" Apr 20 20:04:17.514515 containerd[1634]: time="2026-04-20T20:04:17.509720556Z" level=info msg="CreateContainer within sandbox \"023f1d178593fff81f33c2329dfd5ed7664d4a93fa42bff6207725fae955d5ce\" for name:\"kube-controller-manager\" attempt:2 returns container id \"850e2c3b9840f46d069a347870be59efc5165ce578e5666c196b8cdae63f35a9\"" Apr 20 20:04:17.677445 kubelet[3713]: E0420 20:04:17.654581 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:04:19.251609 kubelet[3713]: E0420 20:04:19.250732 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.715s" Apr 20 20:04:19.852939 containerd[1634]: time="2026-04-20T20:04:19.846533424Z" level=info msg="StartContainer for \"850e2c3b9840f46d069a347870be59efc5165ce578e5666c196b8cdae63f35a9\"" Apr 20 20:04:23.060577 kubelet[3713]: E0420 20:04:23.059280 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:04:23.084816 containerd[1634]: time="2026-04-20T20:04:23.081178045Z" level=info msg="connecting to shim 850e2c3b9840f46d069a347870be59efc5165ce578e5666c196b8cdae63f35a9" 
address="unix:///run/containerd/s/391cf9bf55e04de8e45b690b8088eed9410842836a0833df7de036e9b45471e5" protocol=ttrpc version=3 Apr 20 20:04:24.250912 kubelet[3713]: E0420 20:04:24.250438 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.718s" Apr 20 20:04:27.486280 kubelet[3713]: E0420 20:04:27.395708 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.031s" Apr 20 20:04:29.755532 kubelet[3713]: E0420 20:04:29.649684 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:04:31.442145 systemd[1]: Started cri-containerd-850e2c3b9840f46d069a347870be59efc5165ce578e5666c196b8cdae63f35a9.scope - libcontainer container 850e2c3b9840f46d069a347870be59efc5165ce578e5666c196b8cdae63f35a9. Apr 20 20:04:31.771138 containerd[1634]: time="2026-04-20T20:04:31.766232447Z" level=info msg="container event discarded" container=66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53 type=CONTAINER_STARTED_EVENT Apr 20 20:04:39.622974 kubelet[3713]: E0420 20:04:39.425490 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:04:47.800564 containerd[1634]: time="2026-04-20T20:04:47.795964938Z" level=info msg="StartContainer for \"850e2c3b9840f46d069a347870be59efc5165ce578e5666c196b8cdae63f35a9\" returns successfully" Apr 20 20:04:48.250205 kubelet[3713]: E0420 20:04:48.076993 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:04:49.554565 kubelet[3713]: E0420 20:04:49.384020 3713 kubelet.go:2691] "Housekeeping took longer 
than expected" err="housekeeping took too long" expected="1s" actual="21.773s" Apr 20 20:04:52.371937 kubelet[3713]: E0420 20:04:52.320777 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:04:55.167812 kubelet[3713]: E0420 20:04:54.981739 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:04:55.553557 kubelet[3713]: E0420 20:04:55.542989 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.279s" Apr 20 20:05:03.664922 kubelet[3713]: E0420 20:05:03.560406 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:05:04.730580 kubelet[3713]: E0420 20:05:04.651320 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.948s" Apr 20 20:05:07.244204 kubelet[3713]: E0420 20:05:07.239038 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:08.059858 systemd[1]: cri-containerd-66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53.scope: Deactivated successfully. Apr 20 20:05:08.082572 systemd[1]: cri-containerd-66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53.scope: Consumed 1min 21.050s CPU time, 23.9M memory peak. 
Apr 20 20:05:08.341663 containerd[1634]: time="2026-04-20T20:05:08.283757238Z" level=info msg="received container exit event container_id:\"66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53\" id:\"66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53\" pid:3848 exit_status:1 exited_at:{seconds:1776715508 nanos:236641651}" Apr 20 20:05:10.498631 kubelet[3713]: E0420 20:05:10.427025 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:05:17.393347 kubelet[3713]: E0420 20:05:17.381335 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:05:19.637254 containerd[1634]: time="2026-04-20T20:05:19.556665271Z" level=error msg="failed to drain init process 66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53 io" error="context deadline exceeded" runtime=io.containerd.runc.v2 Apr 20 20:05:19.891712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53-rootfs.mount: Deactivated successfully. 
Apr 20 20:05:20.224203 containerd[1634]: time="2026-04-20T20:05:20.210711329Z" level=error msg="failed to delete task" error="rpc error: code = Unknown desc = failed to delete task: context deadline exceeded: " id=66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53 Apr 20 20:05:20.562341 containerd[1634]: time="2026-04-20T20:05:20.561476277Z" level=error msg="failed to handle container TaskExit event container_id:\"66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53\" id:\"66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53\" pid:3848 exit_status:1 exited_at:{seconds:1776715508 nanos:236641651}" error="failed to stop container: failed to delete task: failed to delete task: context deadline exceeded: " Apr 20 20:05:21.452328 kubelet[3713]: E0420 20:05:21.450764 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="16.591s" Apr 20 20:05:21.938447 kubelet[3713]: E0420 20:05:21.935306 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:22.248393 containerd[1634]: time="2026-04-20T20:05:22.247252768Z" level=info msg="TaskExit event container_id:\"66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53\" id:\"66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53\" pid:3848 exit_status:1 exited_at:{seconds:1776715508 nanos:236641651}" Apr 20 20:05:22.273794 kubelet[3713]: E0420 20:05:22.259525 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:22.785773 kubelet[3713]: E0420 20:05:22.785028 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 
20:05:23.606993 kubelet[3713]: E0420 20:05:23.606733 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:05:30.514869 kubelet[3713]: E0420 20:05:30.508676 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:05:32.190509 kubelet[3713]: E0420 20:05:32.181658 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.648s" Apr 20 20:05:33.986467 kubelet[3713]: E0420 20:05:33.969925 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.642s" Apr 20 20:05:35.088801 kubelet[3713]: I0420 20:05:35.088405 3713 scope.go:122] "RemoveContainer" containerID="3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749" Apr 20 20:05:35.186179 kubelet[3713]: E0420 20:05:35.185464 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:35.241937 kubelet[3713]: I0420 20:05:35.239374 3713 scope.go:122] "RemoveContainer" containerID="66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53" Apr 20 20:05:35.283363 kubelet[3713]: E0420 20:05:35.264247 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:36.019566 kubelet[3713]: E0420 20:05:36.016616 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:05:36.085762 containerd[1634]: 
time="2026-04-20T20:05:36.084181102Z" level=info msg="CreateContainer within sandbox \"0fa31d4d5e50ddeaf2ea96f37d625c207b2977ad5d09cef43e3a6fded0e4c214\" for container name:\"kube-scheduler\" attempt:3" Apr 20 20:05:37.294412 containerd[1634]: time="2026-04-20T20:05:37.247897552Z" level=info msg="RemoveContainer for \"3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749\"" Apr 20 20:05:38.081408 containerd[1634]: time="2026-04-20T20:05:38.081195680Z" level=info msg="RemoveContainer for \"3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749\" returns successfully" Apr 20 20:05:38.307471 kubelet[3713]: E0420 20:05:38.253740 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:38.573499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3394487072.mount: Deactivated successfully. Apr 20 20:05:38.695685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount439513228.mount: Deactivated successfully. 
Apr 20 20:05:38.786795 containerd[1634]: time="2026-04-20T20:05:38.785768418Z" level=info msg="Container ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:05:38.957595 kubelet[3713]: E0420 20:05:38.954849 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.074s" Apr 20 20:05:41.420319 containerd[1634]: time="2026-04-20T20:05:41.419642480Z" level=info msg="CreateContainer within sandbox \"0fa31d4d5e50ddeaf2ea96f37d625c207b2977ad5d09cef43e3a6fded0e4c214\" for name:\"kube-scheduler\" attempt:3 returns container id \"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\"" Apr 20 20:05:41.450356 kubelet[3713]: E0420 20:05:41.448581 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:05:41.542449 kubelet[3713]: E0420 20:05:41.541530 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.56s" Apr 20 20:05:41.558352 containerd[1634]: time="2026-04-20T20:05:41.556909778Z" level=info msg="StartContainer for \"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\"" Apr 20 20:05:42.604411 containerd[1634]: time="2026-04-20T20:05:42.588993911Z" level=info msg="connecting to shim ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09" address="unix:///run/containerd/s/ed7d8c520f12ac1fb47a8aa71220272282162638ec4a368bb2c465689728ccc8" protocol=ttrpc version=3 Apr 20 20:05:42.984081 kubelet[3713]: E0420 20:05:42.982745 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.16s" Apr 20 20:05:43.432411 systemd[1]: Started cri-containerd-ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09.scope - libcontainer container 
ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09. Apr 20 20:05:46.962538 kubelet[3713]: E0420 20:05:46.959723 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:05:47.333945 containerd[1634]: time="2026-04-20T20:05:47.333552112Z" level=info msg="StartContainer for \"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" returns successfully" Apr 20 20:05:48.740656 kubelet[3713]: E0420 20:05:48.740559 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:50.589376 kubelet[3713]: E0420 20:05:50.584592 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:51.795137 kubelet[3713]: E0420 20:05:51.794463 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:52.282166 kubelet[3713]: E0420 20:05:52.193852 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:05:53.424297 kubelet[3713]: E0420 20:05:53.414978 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:55.371268 kubelet[3713]: E0420 20:05:55.366176 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.532s" Apr 20 20:05:58.076405 kubelet[3713]: E0420 20:05:58.073504 3713 kubelet.go:3130] "Container 
runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:05:58.201224 kubelet[3713]: E0420 20:05:58.092858 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.203s" Apr 20 20:06:02.868467 kubelet[3713]: E0420 20:06:02.865300 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:06:03.857824 kubelet[3713]: E0420 20:06:03.831834 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:06:04.837014 kubelet[3713]: E0420 20:06:04.835865 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:06:09.141494 kubelet[3713]: E0420 20:06:09.141080 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:06:14.624164 kubelet[3713]: E0420 20:06:14.622768 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:06:15.461310 kubelet[3713]: E0420 20:06:15.451824 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.617s" Apr 20 20:06:17.523035 kubelet[3713]: E0420 20:06:17.517589 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.586s" Apr 20 20:06:19.031779 kubelet[3713]: E0420 20:06:19.030557 3713 
kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.196s" Apr 20 20:06:19.973241 kubelet[3713]: E0420 20:06:19.965547 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:06:22.170411 kubelet[3713]: E0420 20:06:22.168556 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.307s" Apr 20 20:06:23.972330 kubelet[3713]: E0420 20:06:23.962181 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.772s" Apr 20 20:06:25.834483 kubelet[3713]: E0420 20:06:25.830828 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:06:26.999642 kubelet[3713]: E0420 20:06:26.999496 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.005s" Apr 20 20:06:28.285103 kubelet[3713]: E0420 20:06:28.261943 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.24s" Apr 20 20:06:29.483112 kubelet[3713]: E0420 20:06:29.482697 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.159s" Apr 20 20:06:29.796190 kubelet[3713]: I0420 20:06:29.792870 3713 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 20 20:06:30.297117 containerd[1634]: time="2026-04-20T20:06:30.217917707Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 20 20:06:30.508362 kubelet[3713]: I0420 20:06:30.507915 3713 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 20 20:06:31.102791 kubelet[3713]: E0420 20:06:31.088915 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:06:31.274434 kubelet[3713]: E0420 20:06:31.271428 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.424s" Apr 20 20:06:32.319212 systemd[1]: cri-containerd-850e2c3b9840f46d069a347870be59efc5165ce578e5666c196b8cdae63f35a9.scope: Deactivated successfully. Apr 20 20:06:32.320880 systemd[1]: cri-containerd-850e2c3b9840f46d069a347870be59efc5165ce578e5666c196b8cdae63f35a9.scope: Consumed 54.971s CPU time, 42.4M memory peak. Apr 20 20:06:32.825022 containerd[1634]: time="2026-04-20T20:06:32.822623189Z" level=info msg="received container exit event container_id:\"850e2c3b9840f46d069a347870be59efc5165ce578e5666c196b8cdae63f35a9\" id:\"850e2c3b9840f46d069a347870be59efc5165ce578e5666c196b8cdae63f35a9\" pid:3970 exit_status:1 exited_at:{seconds:1776715592 nanos:479954837}" Apr 20 20:06:38.935448 kubelet[3713]: E0420 20:06:38.816830 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:06:39.607596 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-850e2c3b9840f46d069a347870be59efc5165ce578e5666c196b8cdae63f35a9-rootfs.mount: Deactivated successfully. 
Apr 20 20:06:39.984516 kubelet[3713]: E0420 20:06:39.981889 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.876s" Apr 20 20:06:42.475121 kubelet[3713]: E0420 20:06:42.473923 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.491s" Apr 20 20:06:42.685351 kubelet[3713]: I0420 20:06:42.684749 3713 scope.go:122] "RemoveContainer" containerID="de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d" Apr 20 20:06:42.776297 kubelet[3713]: I0420 20:06:42.770728 3713 scope.go:122] "RemoveContainer" containerID="850e2c3b9840f46d069a347870be59efc5165ce578e5666c196b8cdae63f35a9" Apr 20 20:06:42.784256 kubelet[3713]: E0420 20:06:42.783024 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:06:43.158948 containerd[1634]: time="2026-04-20T20:06:43.157772632Z" level=info msg="RemoveContainer for \"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\"" Apr 20 20:06:43.352108 containerd[1634]: time="2026-04-20T20:06:43.351621152Z" level=info msg="CreateContainer within sandbox \"023f1d178593fff81f33c2329dfd5ed7664d4a93fa42bff6207725fae955d5ce\" for container name:\"kube-controller-manager\" attempt:3" Apr 20 20:06:43.673031 containerd[1634]: time="2026-04-20T20:06:43.672651069Z" level=info msg="RemoveContainer for \"de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d\" returns successfully" Apr 20 20:06:44.505594 kubelet[3713]: E0420 20:06:44.493940 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:06:44.956739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3012610552.mount: Deactivated successfully. 
Apr 20 20:06:44.997713 containerd[1634]: time="2026-04-20T20:06:44.994998707Z" level=info msg="Container df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:06:45.045204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3999198587.mount: Deactivated successfully. Apr 20 20:06:45.218420 kubelet[3713]: E0420 20:06:45.209654 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.374s" Apr 20 20:06:46.296099 containerd[1634]: time="2026-04-20T20:06:46.295400340Z" level=info msg="CreateContainer within sandbox \"023f1d178593fff81f33c2329dfd5ed7664d4a93fa42bff6207725fae955d5ce\" for name:\"kube-controller-manager\" attempt:3 returns container id \"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\"" Apr 20 20:06:46.750779 containerd[1634]: time="2026-04-20T20:06:46.700408814Z" level=info msg="StartContainer for \"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\"" Apr 20 20:06:47.362179 kubelet[3713]: E0420 20:06:47.349631 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.518s" Apr 20 20:06:47.500247 containerd[1634]: time="2026-04-20T20:06:47.498735021Z" level=info msg="connecting to shim df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7" address="unix:///run/containerd/s/391cf9bf55e04de8e45b690b8088eed9410842836a0833df7de036e9b45471e5" protocol=ttrpc version=3 Apr 20 20:06:48.430420 systemd[1]: Started cri-containerd-df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7.scope - libcontainer container df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7. 
Apr 20 20:06:51.045156 containerd[1634]: time="2026-04-20T20:06:50.902470765Z" level=error msg="get state for df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7" error="context deadline exceeded" Apr 20 20:06:51.045156 containerd[1634]: time="2026-04-20T20:06:51.043305403Z" level=warning msg="unknown status" status=0 Apr 20 20:06:51.182149 kubelet[3713]: E0420 20:06:50.868811 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:06:53.583254 containerd[1634]: time="2026-04-20T20:06:53.574922633Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 20:06:53.985558 kubelet[3713]: E0420 20:06:53.979838 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.096s" Apr 20 20:06:55.177359 kubelet[3713]: E0420 20:06:55.175864 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.192s" Apr 20 20:06:55.661357 kubelet[3713]: E0420 20:06:55.657722 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:06:57.941358 kubelet[3713]: E0420 20:06:57.768806 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:06:58.332169 containerd[1634]: time="2026-04-20T20:06:58.325653275Z" level=info msg="StartContainer for \"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" returns successfully" Apr 20 20:07:02.764299 kubelet[3713]: E0420 20:07:02.763997 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.656s" Apr 20 
20:07:03.771253 kubelet[3713]: E0420 20:07:03.763433 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:07:07.122188 kubelet[3713]: E0420 20:07:07.096593 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.099s" Apr 20 20:07:07.901476 kubelet[3713]: E0420 20:07:07.899735 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:07:08.385153 kubelet[3713]: E0420 20:07:08.376924 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.202s" Apr 20 20:07:09.626506 kubelet[3713]: E0420 20:07:09.624859 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:07:10.269201 kubelet[3713]: E0420 20:07:10.267841 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:07:11.121352 kubelet[3713]: E0420 20:07:11.121262 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.674s" Apr 20 20:07:15.268261 kubelet[3713]: E0420 20:07:15.257023 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.112s" Apr 20 20:07:15.315718 kubelet[3713]: E0420 20:07:15.283594 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:07:16.677188 kubelet[3713]: E0420 
20:07:16.390020 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.113s" Apr 20 20:07:17.449770 kubelet[3713]: E0420 20:07:17.449721 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:07:20.310754 kubelet[3713]: E0420 20:07:20.310074 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:07:25.349097 kubelet[3713]: E0420 20:07:25.348456 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:07:29.129397 kubelet[3713]: E0420 20:07:29.084004 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.263s" Apr 20 20:07:29.183152 kubelet[3713]: E0420 20:07:29.179544 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:07:30.482821 kubelet[3713]: E0420 20:07:30.481757 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:07:34.050023 kubelet[3713]: E0420 20:07:34.044017 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:07:35.810256 kubelet[3713]: E0420 20:07:35.783035 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" Apr 20 20:07:41.048128 kubelet[3713]: E0420 20:07:40.992413 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:07:44.202182 kubelet[3713]: E0420 20:07:44.183852 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.329s" Apr 20 20:07:46.987430 kubelet[3713]: E0420 20:07:46.980851 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:07:46.987430 kubelet[3713]: E0420 20:07:46.981213 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.492s" Apr 20 20:07:50.356304 kubelet[3713]: E0420 20:07:50.351919 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.369s" Apr 20 20:07:53.967349 kubelet[3713]: E0420 20:07:53.913617 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:07:56.453573 kubelet[3713]: E0420 20:07:56.449889 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.093s" Apr 20 20:07:58.143484 systemd[1]: cri-containerd-df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7.scope: Deactivated successfully. Apr 20 20:07:58.169996 systemd[1]: cri-containerd-df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7.scope: Consumed 28.050s CPU time, 20.1M memory peak. 
Apr 20 20:07:58.675703 containerd[1634]: time="2026-04-20T20:07:58.665608272Z" level=info msg="received container exit event container_id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" pid:4088 exit_status:1 exited_at:{seconds:1776715678 nanos:534774891}"
Apr 20 20:08:02.656294 kubelet[3713]: E0420 20:08:02.651772 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:08:03.187530 systemd[1]: cri-containerd-ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09.scope: Deactivated successfully.
Apr 20 20:08:03.240813 systemd[1]: cri-containerd-ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09.scope: Consumed 33.288s CPU time, 20.7M memory peak.
Apr 20 20:08:06.935019 containerd[1634]: time="2026-04-20T20:08:06.662849737Z" level=info msg="received container exit event container_id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" pid:4035 exit_status:1 exited_at:{seconds:1776715683 nanos:352679594}"
Apr 20 20:08:11.016152 containerd[1634]: time="2026-04-20T20:08:10.775837651Z" level=error msg="failed to handle container TaskExit event container_id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" pid:4088 exit_status:1 exited_at:{seconds:1776715678 nanos:534774891}" error="failed to stop container: unknown error after kill: runc did not terminate successfully: exit status 137: "
Apr 20 20:08:11.421366 kubelet[3713]: E0420 20:08:11.417467 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:08:12.202015 containerd[1634]: time="2026-04-20T20:08:12.179685182Z" level=info msg="TaskExit event container_id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" pid:4088 exit_status:1 exited_at:{seconds:1776715678 nanos:534774891}"
Apr 20 20:08:17.986786 containerd[1634]: time="2026-04-20T20:08:17.597847512Z" level=error msg="ttrpc: received message on inactive stream" stream=45
Apr 20 20:08:18.102347 containerd[1634]: time="2026-04-20T20:08:17.689905570Z" level=error msg="failed to handle container TaskExit event container_id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" pid:4035 exit_status:1 exited_at:{seconds:1776715683 nanos:352679594}" error="failed to stop container: context deadline exceeded"
Apr 20 20:08:18.342723 containerd[1634]: time="2026-04-20T20:08:18.095762031Z" level=error msg="ttrpc: received message on inactive stream" stream=43
Apr 20 20:08:19.627815 kubelet[3713]: E0420 20:08:19.626789 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="23.07s"
Apr 20 20:08:19.777257 kubelet[3713]: E0420 20:08:19.612677 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:08:22.503169 containerd[1634]: time="2026-04-20T20:08:22.368841151Z" level=error msg="Failed to handle backOff event container_id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" pid:4088 exit_status:1 exited_at:{seconds:1776715678 nanos:534774891} for df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 20:08:22.627237 containerd[1634]: time="2026-04-20T20:08:22.495354035Z" level=error msg="ttrpc: received message on inactive stream" stream=47
Apr 20 20:08:22.648177 containerd[1634]: time="2026-04-20T20:08:22.627798567Z" level=info msg="TaskExit event container_id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" pid:4035 exit_status:1 exited_at:{seconds:1776715683 nanos:352679594}"
Apr 20 20:08:22.660830 containerd[1634]: time="2026-04-20T20:08:22.656619280Z" level=error msg="ttrpc: received message on inactive stream" stream=43
Apr 20 20:08:26.480298 kubelet[3713]: E0420 20:08:26.425816 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:08:30.344700 kubelet[3713]: E0420 20:08:30.289499 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.434s"
Apr 20 20:08:32.522508 kubelet[3713]: E0420 20:08:32.518396 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:08:32.639182 containerd[1634]: time="2026-04-20T20:08:32.637297765Z" level=error msg="failed to delete task" error="context deadline exceeded" id=ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09
Apr 20 20:08:32.865418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09-rootfs.mount: Deactivated successfully.
Apr 20 20:08:33.254664 containerd[1634]: time="2026-04-20T20:08:32.888460726Z" level=error msg="Failed to handle backOff event container_id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" pid:4035 exit_status:1 exited_at:{seconds:1776715683 nanos:352679594} for ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 20:08:33.358897 containerd[1634]: time="2026-04-20T20:08:33.035622116Z" level=error msg="ttrpc: received message on inactive stream" stream=61
Apr 20 20:08:33.378208 containerd[1634]: time="2026-04-20T20:08:33.375547866Z" level=info msg="TaskExit event container_id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" pid:4088 exit_status:1 exited_at:{seconds:1776715678 nanos:534774891}"
Apr 20 20:08:34.832420 kubelet[3713]: E0420 20:08:34.785728 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.286s"
Apr 20 20:08:39.368876 kubelet[3713]: E0420 20:08:39.351504 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:08:43.766296 containerd[1634]: time="2026-04-20T20:08:43.758835668Z" level=error msg="Failed to handle backOff event container_id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" pid:4088 exit_status:1 exited_at:{seconds:1776715678 nanos:534774891} for df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 20:08:43.896172 containerd[1634]: time="2026-04-20T20:08:43.797391582Z" level=error msg="ttrpc: received message on inactive stream" stream=55
Apr 20 20:08:43.896172 containerd[1634]: time="2026-04-20T20:08:43.876836034Z" level=error msg="ttrpc: received message on inactive stream" stream=59
Apr 20 20:08:44.026988 containerd[1634]: time="2026-04-20T20:08:44.003786493Z" level=info msg="TaskExit event container_id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" pid:4035 exit_status:1 exited_at:{seconds:1776715683 nanos:352679594}"
Apr 20 20:08:45.644457 kubelet[3713]: E0420 20:08:45.643781 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.751s"
Apr 20 20:08:46.588238 kubelet[3713]: E0420 20:08:46.586869 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:08:47.557703 kubelet[3713]: E0420 20:08:47.432749 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:08:51.074619 kubelet[3713]: E0420 20:08:50.966793 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:08:54.266474 containerd[1634]: time="2026-04-20T20:08:54.135790411Z" level=error msg="failed to delete task" error="context deadline exceeded" id=ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09
Apr 20 20:08:54.466418 containerd[1634]: time="2026-04-20T20:08:54.348908272Z" level=error msg="ttrpc: received message on inactive stream" stream=75
Apr 20 20:08:54.964234 containerd[1634]: time="2026-04-20T20:08:54.810580496Z" level=error msg="Failed to handle backOff event container_id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" pid:4035 exit_status:1 exited_at:{seconds:1776715683 nanos:352679594} for ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 20:08:55.238592 containerd[1634]: time="2026-04-20T20:08:55.195855587Z" level=info msg="TaskExit event container_id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" pid:4088 exit_status:1 exited_at:{seconds:1776715678 nanos:534774891}"
Apr 20 20:08:57.794856 kubelet[3713]: E0420 20:08:57.793588 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:09:05.445401 containerd[1634]: time="2026-04-20T20:09:05.385873927Z" level=error msg="ttrpc: received message on inactive stream" stream=69
Apr 20 20:09:05.491534 containerd[1634]: time="2026-04-20T20:09:05.483800976Z" level=error msg="ttrpc: received message on inactive stream" stream=71
Apr 20 20:09:05.523394 containerd[1634]: time="2026-04-20T20:09:05.485640491Z" level=error msg="Failed to handle backOff event container_id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" pid:4088 exit_status:1 exited_at:{seconds:1776715678 nanos:534774891} for df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 20:09:05.698672 containerd[1634]: time="2026-04-20T20:09:05.671616188Z" level=info msg="TaskExit event container_id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" pid:4035 exit_status:1 exited_at:{seconds:1776715683 nanos:352679594}"
Apr 20 20:09:06.160600 containerd[1634]: time="2026-04-20T20:09:06.089922778Z" level=info msg="container event discarded" container=de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d type=CONTAINER_STOPPED_EVENT
Apr 20 20:09:07.594290 kubelet[3713]: E0420 20:09:07.168814 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:09:14.152387 kubelet[3713]: E0420 20:09:13.980735 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="27.27s"
Apr 20 20:09:14.651351 kubelet[3713]: E0420 20:09:14.645911 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:09:15.690560 containerd[1634]: time="2026-04-20T20:09:15.674022693Z" level=error msg="Failed to handle backOff event container_id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" pid:4035 exit_status:1 exited_at:{seconds:1776715683 nanos:352679594} for ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 20:09:16.046811 containerd[1634]: time="2026-04-20T20:09:15.988907164Z" level=error msg="ttrpc: received message on inactive stream" stream=87
Apr 20 20:09:16.081529 containerd[1634]: time="2026-04-20T20:09:16.050774125Z" level=info msg="TaskExit event container_id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" pid:4088 exit_status:1 exited_at:{seconds:1776715678 nanos:534774891}"
Apr 20 20:09:16.135727 containerd[1634]: time="2026-04-20T20:09:16.105652507Z" level=error msg="ttrpc: received message on inactive stream" stream=83
Apr 20 20:09:16.470618 containerd[1634]: time="2026-04-20T20:09:16.465805518Z" level=info msg="container event discarded" container=850e2c3b9840f46d069a347870be59efc5165ce578e5666c196b8cdae63f35a9 type=CONTAINER_CREATED_EVENT
Apr 20 20:09:17.280626 containerd[1634]: time="2026-04-20T20:09:17.259850509Z" level=info msg="container event discarded" container=ca99dbfee52f224e80360acc807314dc9062c30065f0675e98042f66e1549299 type=CONTAINER_DELETED_EVENT
Apr 20 20:09:17.356535 containerd[1634]: time="2026-04-20T20:09:17.350879266Z" level=info msg="StopContainer for \"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" with timeout 30 (s)"
Apr 20 20:09:17.849327 containerd[1634]: time="2026-04-20T20:09:17.848518130Z" level=info msg="Stop container \"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" with signal terminated"
Apr 20 20:09:18.914357 containerd[1634]: time="2026-04-20T20:09:18.888938822Z" level=info msg="StopContainer for \"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" with timeout 30 (s)"
Apr 20 20:09:21.653845 containerd[1634]: time="2026-04-20T20:09:21.245886277Z" level=info msg="Stop container \"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" with signal terminated"
Apr 20 20:09:24.266476 kubelet[3713]: E0420 20:09:23.998862 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:09:27.532312 containerd[1634]: time="2026-04-20T20:09:27.519743214Z" level=error msg="Failed to handle backOff event container_id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" pid:4088 exit_status:1 exited_at:{seconds:1776715678 nanos:534774891} for df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 20:09:27.587695 containerd[1634]: time="2026-04-20T20:09:27.542509942Z" level=info msg="TaskExit event container_id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" pid:4035 exit_status:1 exited_at:{seconds:1776715683 nanos:352679594}"
Apr 20 20:09:27.587695 containerd[1634]: time="2026-04-20T20:09:27.530475473Z" level=error msg="ttrpc: received message on inactive stream" stream=81
Apr 20 20:09:28.053431 containerd[1634]: time="2026-04-20T20:09:27.899791197Z" level=error msg="ttrpc: received message on inactive stream" stream=83
Apr 20 20:09:29.621344 containerd[1634]: time="2026-04-20T20:09:29.570677819Z" level=error msg="get state for ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09" error="context deadline exceeded"
Apr 20 20:09:29.629356 containerd[1634]: time="2026-04-20T20:09:29.622409244Z" level=warning msg="unknown status" status=0
Apr 20 20:09:29.961835 kubelet[3713]: E0420 20:09:29.862803 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:09:34.236518 containerd[1634]: time="2026-04-20T20:09:33.997872855Z" level=error msg="get state for ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09" error="context deadline exceeded"
Apr 20 20:09:34.386920 containerd[1634]: time="2026-04-20T20:09:34.293036804Z" level=warning msg="unknown status" status=0
Apr 20 20:09:34.473621 kubelet[3713]: E0420 20:09:34.472679 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="19.78s"
Apr 20 20:09:34.501784 containerd[1634]: time="2026-04-20T20:09:34.296889177Z" level=error msg="ttrpc: received message on inactive stream" stream=93
Apr 20 20:09:34.569490 containerd[1634]: time="2026-04-20T20:09:34.492555233Z" level=error msg="ttrpc: received message on inactive stream" stream=95
Apr 20 20:09:36.620345 kubelet[3713]: E0420 20:09:36.556294 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:09:37.582510 containerd[1634]: time="2026-04-20T20:09:37.569516356Z" level=error msg="failed to delete task" error="context deadline exceeded" id=ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09
Apr 20 20:09:37.853500 containerd[1634]: time="2026-04-20T20:09:37.832183200Z" level=error msg="ttrpc: received message on inactive stream" stream=99
Apr 20 20:09:37.981405 containerd[1634]: time="2026-04-20T20:09:37.962886030Z" level=error msg="Failed to handle backOff event container_id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" pid:4035 exit_status:1 exited_at:{seconds:1776715683 nanos:352679594} for ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 20:09:44.400444 containerd[1634]: time="2026-04-20T20:09:44.324798119Z" level=info msg="TaskExit event container_id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" pid:4088 exit_status:1 exited_at:{seconds:1776715678 nanos:534774891}"
Apr 20 20:09:46.294345 containerd[1634]: time="2026-04-20T20:09:46.280594021Z" level=info msg="container event discarded" container=850e2c3b9840f46d069a347870be59efc5165ce578e5666c196b8cdae63f35a9 type=CONTAINER_STARTED_EVENT
Apr 20 20:09:46.682434 kubelet[3713]: E0420 20:09:46.284587 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:09:54.421162 containerd[1634]: time="2026-04-20T20:09:54.288740486Z" level=error msg="failed to delete task" error="context deadline exceeded" id=df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7
Apr 20 20:09:54.572484 update_engine[1606]: I20260420 20:09:54.496271 1606 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 20 20:09:54.572484 update_engine[1606]: I20260420 20:09:54.548614 1606 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 20 20:09:54.752768 update_engine[1606]: I20260420 20:09:54.747836 1606 omaha_request_params.cc:62] Current group set to alpha
Apr 20 20:09:54.802935 update_engine[1606]: I20260420 20:09:54.802606 1606 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 20 20:09:54.816614 update_engine[1606]: I20260420 20:09:54.815677 1606 update_attempter.cc:643] Scheduling an action processor start.
Apr 20 20:09:54.828689 update_engine[1606]: I20260420 20:09:54.823654 1606 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 20 20:09:54.865389 update_engine[1606]: I20260420 20:09:54.863597 1606 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 20 20:09:54.871381 update_engine[1606]: I20260420 20:09:54.867798 1606 omaha_request_action.cc:272] Request:
Apr 20 20:09:54.871381 update_engine[1606]:
Apr 20 20:09:54.871381 update_engine[1606]:
Apr 20 20:09:54.871381 update_engine[1606]:
Apr 20 20:09:54.871381 update_engine[1606]:
Apr 20 20:09:54.871381 update_engine[1606]:
Apr 20 20:09:54.871381 update_engine[1606]:
Apr 20 20:09:54.871381 update_engine[1606]:
Apr 20 20:09:54.871381 update_engine[1606]:
Apr 20 20:09:54.941605 update_engine[1606]: I20260420 20:09:54.874655 1606 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 20:09:54.941605 update_engine[1606]: I20260420 20:09:54.890877 1606 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 20:09:54.962495 update_engine[1606]: I20260420 20:09:54.961750 1606 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 20:09:54.974866 update_engine[1606]: E20260420 20:09:54.972958 1606 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 20:09:54.984768 update_engine[1606]: I20260420 20:09:54.982717 1606 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 20 20:09:55.125517 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7-rootfs.mount: Deactivated successfully.
Apr 20 20:09:55.494601 containerd[1634]: time="2026-04-20T20:09:55.347528918Z" level=error msg="Failed to handle backOff event container_id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" pid:4088 exit_status:1 exited_at:{seconds:1776715678 nanos:534774891} for df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 20:09:55.542634 containerd[1634]: time="2026-04-20T20:09:55.462858853Z" level=error msg="ttrpc: received message on inactive stream" stream=99
Apr 20 20:09:55.647342 containerd[1634]: time="2026-04-20T20:09:55.480885733Z" level=info msg="Kill container \"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\""
Apr 20 20:09:55.769221 containerd[1634]: time="2026-04-20T20:09:55.667658560Z" level=info msg="TaskExit event container_id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" pid:4035 exit_status:1 exited_at:{seconds:1776715683 nanos:352679594}"
Apr 20 20:09:56.151606 locksmithd[1698]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 20 20:10:04.085619 kubelet[3713]: E0420 20:10:03.638872 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:10:04.624971 kubelet[3713]: E0420 20:10:04.620314 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="29.93s"
Apr 20 20:10:05.390510 update_engine[1606]: I20260420 20:10:05.386552 1606 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 20:10:05.405954 update_engine[1606]: I20260420 20:10:05.395570 1606 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 20:10:05.410867 containerd[1634]: time="2026-04-20T20:10:05.402582312Z" level=info msg="Kill container \"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\""
Apr 20 20:10:05.464383 update_engine[1606]: I20260420 20:10:05.460636 1606 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 20:10:05.474368 update_engine[1606]: E20260420 20:10:05.472591 1606 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 20:10:05.487333 update_engine[1606]: I20260420 20:10:05.480746 1606 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 20 20:10:06.363341 containerd[1634]: time="2026-04-20T20:10:06.354921119Z" level=error msg="ttrpc: received message on inactive stream" stream=107
Apr 20 20:10:06.426384 containerd[1634]: time="2026-04-20T20:10:06.423593219Z" level=error msg="ttrpc: received message on inactive stream" stream=111
Apr 20 20:10:06.594310 containerd[1634]: time="2026-04-20T20:10:06.467773323Z" level=error msg="Failed to handle backOff event container_id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" pid:4035 exit_status:1 exited_at:{seconds:1776715683 nanos:352679594} for ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 20:10:11.733573 kubelet[3713]: E0420 20:10:11.666680 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:10:15.389595 update_engine[1606]: I20260420 20:10:15.382788 1606 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 20:10:15.399505 update_engine[1606]: I20260420 20:10:15.397419 1606 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 20:10:15.424336 update_engine[1606]: I20260420 20:10:15.421707 1606 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 20:10:15.486411 update_engine[1606]: E20260420 20:10:15.482676 1606 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 20:10:15.521375 update_engine[1606]: I20260420 20:10:15.518901 1606 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 20 20:10:17.463521 kubelet[3713]: E0420 20:10:17.454739 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:10:17.501766 kubelet[3713]: E0420 20:10:17.499585 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.846s"
Apr 20 20:10:19.691538 kubelet[3713]: E0420 20:10:19.690580 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.066s"
Apr 20 20:10:20.889831 kubelet[3713]: E0420 20:10:20.832221 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:10:22.400790 kubelet[3713]: E0420 20:10:22.399815 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.512s"
Apr 20 20:10:23.892983 kubelet[3713]: E0420 20:10:23.883894 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:10:25.014247 kubelet[3713]: E0420 20:10:24.974832 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.499s"
Apr 20 20:10:25.389981 update_engine[1606]: I20260420 20:10:25.385704 1606 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 20:10:25.389981 update_engine[1606]: I20260420 20:10:25.389838 1606 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 20:10:25.424536 update_engine[1606]: I20260420 20:10:25.414572 1606 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 20:10:25.429756 update_engine[1606]: E20260420 20:10:25.426887 1606 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 20:10:25.442266 update_engine[1606]: I20260420 20:10:25.436586 1606 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 20 20:10:25.442266 update_engine[1606]: I20260420 20:10:25.439958 1606 omaha_request_action.cc:617] Omaha request response:
Apr 20 20:10:25.475314 update_engine[1606]: E20260420 20:10:25.452580 1606 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 20 20:10:25.475314 update_engine[1606]: I20260420 20:10:25.454638 1606 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 20 20:10:25.475314 update_engine[1606]: I20260420 20:10:25.460379 1606 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 20 20:10:25.475314 update_engine[1606]: I20260420 20:10:25.470424 1606 update_attempter.cc:306] Processing Done.
Apr 20 20:10:25.478554 update_engine[1606]: E20260420 20:10:25.476938 1606 update_attempter.cc:619] Update failed.
Apr 20 20:10:25.517889 update_engine[1606]: I20260420 20:10:25.486245 1606 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 20 20:10:25.517889 update_engine[1606]: I20260420 20:10:25.488869 1606 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 20 20:10:25.517889 update_engine[1606]: I20260420 20:10:25.489419 1606 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 20 20:10:25.558496 update_engine[1606]: I20260420 20:10:25.519672 1606 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 20 20:10:25.558496 update_engine[1606]: I20260420 20:10:25.528631 1606 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 20 20:10:25.558496 update_engine[1606]: I20260420 20:10:25.529821 1606 omaha_request_action.cc:272] Request:
Apr 20 20:10:25.558496 update_engine[1606]:
Apr 20 20:10:25.558496 update_engine[1606]:
Apr 20 20:10:25.558496 update_engine[1606]:
Apr 20 20:10:25.558496 update_engine[1606]:
Apr 20 20:10:25.558496 update_engine[1606]:
Apr 20 20:10:25.558496 update_engine[1606]:
Apr 20 20:10:25.558496 update_engine[1606]: I20260420 20:10:25.529969 1606 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 20:10:25.558496 update_engine[1606]: I20260420 20:10:25.538866 1606 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 20:10:25.671877 update_engine[1606]: I20260420 20:10:25.670763 1606 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 20:10:25.682920 update_engine[1606]: E20260420 20:10:25.682766 1606 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 20:10:25.695443 update_engine[1606]: I20260420 20:10:25.689388 1606 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 20 20:10:25.695443 update_engine[1606]: I20260420 20:10:25.689616 1606 omaha_request_action.cc:617] Omaha request response:
Apr 20 20:10:25.695443 update_engine[1606]: I20260420 20:10:25.689652 1606 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 20 20:10:25.695443 update_engine[1606]: I20260420 20:10:25.689659 1606 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 20 20:10:25.695443 update_engine[1606]: I20260420 20:10:25.689664 1606 update_attempter.cc:306] Processing Done.
Apr 20 20:10:25.695443 update_engine[1606]: I20260420 20:10:25.689670 1606 update_attempter.cc:310] Error event sent.
Apr 20 20:10:25.695443 update_engine[1606]: I20260420 20:10:25.689730 1606 update_check_scheduler.cc:74] Next update check in 43m29s
Apr 20 20:10:25.837810 locksmithd[1698]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 20 20:10:25.937645 locksmithd[1698]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 20 20:10:28.397906 containerd[1634]: time="2026-04-20T20:10:28.381306212Z" level=info msg="TaskExit event container_id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" pid:4088 exit_status:1 exited_at:{seconds:1776715678 nanos:534774891}"
Apr 20 20:10:29.059277 kubelet[3713]: E0420 20:10:29.057849 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4s"
Apr 20 20:10:30.485365 kubelet[3713]: E0420 20:10:30.451548 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:10:32.859935 containerd[1634]: time="2026-04-20T20:10:32.486562547Z" level=info msg="container event discarded" container=66b0ce7fb3bf31adfcd3b7ba16cd34859d862c180a98c7d4f37f309bd63f8e53 type=CONTAINER_STOPPED_EVENT
Apr 20 20:10:38.161188 containerd[1634]: time="2026-04-20T20:10:38.151660857Z" level=info msg="container event discarded" container=3935db30d0e8393f5e03cebba549568c9a4d21fc2edc9b0e83b838b31f2d8749 type=CONTAINER_DELETED_EVENT
Apr 20 20:10:38.433542 kubelet[3713]: E0420 20:10:38.090992 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:10:39.142448 containerd[1634]: time="2026-04-20T20:10:38.672336853Z" level=error msg="failed to delete task" error="context deadline exceeded" id=df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7
Apr 20 20:10:39.245992 containerd[1634]: time="2026-04-20T20:10:38.839985442Z" level=error msg="ttrpc: received message on inactive stream" stream=117
Apr 20 20:10:39.659567 containerd[1634]: time="2026-04-20T20:10:39.588699095Z" level=error msg="Failed to handle backOff event container_id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" pid:4088 exit_status:1 exited_at:{seconds:1776715678 nanos:534774891} for df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 20:10:39.886359 containerd[1634]: time="2026-04-20T20:10:39.870964020Z" level=info msg="TaskExit event container_id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" pid:4035 exit_status:1 exited_at:{seconds:1776715683 nanos:352679594}"
Apr 20 20:10:40.872263 containerd[1634]: time="2026-04-20T20:10:40.803707782Z" level=info msg="container event discarded" container=ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09 type=CONTAINER_CREATED_EVENT
Apr 20 20:10:43.966472 kubelet[3713]: E0420 20:10:43.959671 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="14.778s"
Apr 20 20:10:47.651999 containerd[1634]: time="2026-04-20T20:10:47.425979091Z" level=info msg="container event discarded" container=ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09 type=CONTAINER_STARTED_EVENT
Apr 20 20:10:47.708999 kubelet[3713]: E0420 20:10:47.565780 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error:
cni plugin not initialized" Apr 20 20:10:50.188421 containerd[1634]: time="2026-04-20T20:10:50.177697464Z" level=error msg="Failed to handle backOff event container_id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" pid:4035 exit_status:1 exited_at:{seconds:1776715683 nanos:352679594} for ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 20:10:50.569340 containerd[1634]: time="2026-04-20T20:10:50.100854295Z" level=error msg="ttrpc: received message on inactive stream" stream=121 Apr 20 20:10:50.763429 containerd[1634]: time="2026-04-20T20:10:50.612897582Z" level=error msg="ttrpc: received message on inactive stream" stream=125 Apr 20 20:10:54.684379 kubelet[3713]: E0420 20:10:54.669863 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:10:55.591991 kubelet[3713]: E0420 20:10:55.586900 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.597s" Apr 20 20:10:57.419430 kubelet[3713]: E0420 20:10:57.403395 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.771s" Apr 20 20:10:59.458476 kubelet[3713]: E0420 20:10:59.411457 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.886s" Apr 20 20:11:00.940457 kubelet[3713]: E0420 20:11:00.824716 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:11:01.885861 kubelet[3713]: E0420 20:11:01.861730 3713 kubelet.go:2691] 
"Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.352s" Apr 20 20:11:03.386555 kubelet[3713]: E0420 20:11:03.380922 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.476s" Apr 20 20:11:06.611443 kubelet[3713]: E0420 20:11:06.608672 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:11:07.036431 kubelet[3713]: E0420 20:11:07.023977 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.123s" Apr 20 20:11:09.835583 kubelet[3713]: E0420 20:11:09.834706 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.012s" Apr 20 20:11:12.563413 kubelet[3713]: E0420 20:11:12.552871 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.707s" Apr 20 20:11:13.543346 kubelet[3713]: E0420 20:11:13.540854 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:11:16.514850 kubelet[3713]: E0420 20:11:16.512871 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.906s" Apr 20 20:11:18.655303 kubelet[3713]: E0420 20:11:18.644905 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.065s" Apr 20 20:11:19.295664 kubelet[3713]: E0420 20:11:19.283652 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:11:19.857144 
kubelet[3713]: E0420 20:11:19.845971 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.137s" Apr 20 20:11:20.938841 kubelet[3713]: E0420 20:11:20.935653 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.088s" Apr 20 20:11:23.388021 kubelet[3713]: E0420 20:11:23.387700 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.515s" Apr 20 20:11:24.758298 kubelet[3713]: E0420 20:11:24.754133 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:11:29.398167 kubelet[3713]: E0420 20:11:29.258931 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.37s" Apr 20 20:11:30.921535 kubelet[3713]: E0420 20:11:30.918978 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:11:34.818552 kubelet[3713]: E0420 20:11:34.804651 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.351s" Apr 20 20:11:35.584317 kubelet[3713]: E0420 20:11:35.583668 3713 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:11:36.488159 kubelet[3713]: E0420 20:11:36.487741 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:11:36.498418 kubelet[3713]: E0420 20:11:36.497158 3713 kubelet.go:2691] "Housekeeping took longer than 
expected" err="housekeeping took too long" expected="1s" actual="1.607s" Apr 20 20:11:40.847495 containerd[1634]: time="2026-04-20T20:11:40.796917581Z" level=info msg="container event discarded" container=850e2c3b9840f46d069a347870be59efc5165ce578e5666c196b8cdae63f35a9 type=CONTAINER_STOPPED_EVENT Apr 20 20:11:42.272401 kubelet[3713]: E0420 20:11:42.258722 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:11:42.956297 kubelet[3713]: E0420 20:11:42.953721 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.073s" Apr 20 20:11:43.831362 containerd[1634]: time="2026-04-20T20:11:43.799017812Z" level=info msg="container event discarded" container=de33f63f0b6aaba948e8e67799a5b4a0bcd540a8b2d6a3c7f3b6021fa789b60d type=CONTAINER_DELETED_EVENT Apr 20 20:11:44.336273 containerd[1634]: time="2026-04-20T20:11:44.333497253Z" level=info msg="TaskExit event container_id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" pid:4088 exit_status:1 exited_at:{seconds:1776715678 nanos:534774891}" Apr 20 20:11:45.703433 containerd[1634]: time="2026-04-20T20:11:45.702215632Z" level=info msg="container event discarded" container=df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7 type=CONTAINER_CREATED_EVENT Apr 20 20:11:46.350451 kubelet[3713]: E0420 20:11:46.345028 3713 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7" Apr 20 20:11:46.425950 kubelet[3713]: E0420 20:11:46.377362 3713 kuberuntime_container.go:895] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline 
exceeded" pod="kube-system/kube-controller-manager-localhost" podUID="59dc6bef4fa0beb64c871485aab08cdf" containerName="kube-controller-manager" containerID="containerd://df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7" gracePeriod=30 Apr 20 20:11:46.448136 kubelet[3713]: E0420 20:11:46.437973 3713 kuberuntime_manager.go:1437] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-controller-manager" containerID={"Type":"containerd","ID":"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7"} pod="kube-system/kube-controller-manager-localhost" Apr 20 20:11:46.651384 kubelet[3713]: E0420 20:11:46.546857 3713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-controller-manager\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-controller-manager-localhost" podUID="59dc6bef4fa0beb64c871485aab08cdf" Apr 20 20:11:46.677234 containerd[1634]: time="2026-04-20T20:11:46.600576392Z" level=error msg="StopContainer for \"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container \"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" to be killed: wait container \"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\": context canceled" Apr 20 20:11:47.199701 kubelet[3713]: E0420 20:11:47.195560 3713 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09" Apr 20 20:11:47.278538 kubelet[3713]: E0420 20:11:47.252693 3713 kuberuntime_container.go:895] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" 
pod="kube-system/kube-scheduler-localhost" podUID="f8c463bc49d886414af4d8b2e5922b9f" containerName="kube-scheduler" containerID="containerd://ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09" gracePeriod=30 Apr 20 20:11:47.478578 kubelet[3713]: E0420 20:11:47.345893 3713 kuberuntime_manager.go:1437] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-scheduler" containerID={"Type":"containerd","ID":"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09"} pod="kube-system/kube-scheduler-localhost" Apr 20 20:11:47.508712 containerd[1634]: time="2026-04-20T20:11:47.266505278Z" level=error msg="StopContainer for \"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container \"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" to be killed: wait container \"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\": context canceled" Apr 20 20:11:47.561254 kubelet[3713]: E0420 20:11:47.548905 3713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-scheduler\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-scheduler-localhost" podUID="f8c463bc49d886414af4d8b2e5922b9f" Apr 20 20:11:48.751287 kubelet[3713]: E0420 20:11:48.688677 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:11:54.590571 containerd[1634]: time="2026-04-20T20:11:54.507870762Z" level=error msg="get state for df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7" error="context deadline exceeded" Apr 20 20:11:54.629810 containerd[1634]: time="2026-04-20T20:11:54.560770727Z" level=error msg="ttrpc: received message 
on inactive stream" stream=135 Apr 20 20:11:54.643657 containerd[1634]: time="2026-04-20T20:11:54.629448771Z" level=warning msg="unknown status" status=0 Apr 20 20:11:54.726699 containerd[1634]: time="2026-04-20T20:11:54.724103628Z" level=error msg="failed to drain init process df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7 io" error="context deadline exceeded" runtime=io.containerd.runc.v2 Apr 20 20:11:54.740797 containerd[1634]: time="2026-04-20T20:11:54.735816537Z" level=error msg="failed to delete task" error="context deadline exceeded" id=df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7 Apr 20 20:11:54.863255 kubelet[3713]: E0420 20:11:54.856553 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:11:54.918551 containerd[1634]: time="2026-04-20T20:11:54.894702049Z" level=error msg="ttrpc: received message on inactive stream" stream=137 Apr 20 20:11:55.063104 containerd[1634]: time="2026-04-20T20:11:55.061458710Z" level=error msg="Failed to handle backOff event container_id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" id:\"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" pid:4088 exit_status:1 exited_at:{seconds:1776715678 nanos:534774891} for df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 20:11:55.220449 containerd[1634]: time="2026-04-20T20:11:55.197948656Z" level=info msg="TaskExit event container_id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" pid:4035 exit_status:1 exited_at:{seconds:1776715683 nanos:352679594}" Apr 20 20:11:56.638296 containerd[1634]: time="2026-04-20T20:11:56.590906824Z" 
level=info msg="container event discarded" container=df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7 type=CONTAINER_STARTED_EVENT Apr 20 20:11:57.066961 kubelet[3713]: E0420 20:11:57.063692 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="14.101s" Apr 20 20:11:57.664381 containerd[1634]: time="2026-04-20T20:11:57.658680611Z" level=info msg="StopContainer for \"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" with timeout 30 (s)" Apr 20 20:11:57.801390 containerd[1634]: time="2026-04-20T20:11:57.795772224Z" level=info msg="StopContainer for \"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" with timeout 30 (s)" Apr 20 20:11:58.325181 containerd[1634]: time="2026-04-20T20:11:58.274798484Z" level=info msg="Skipping the sending of signal terminated to container \"df3012467d3b9f172163898d0ba53578c877b3f94063da75faadda18de7e33e7\" because a prior stop with timeout>0 request already sent the signal" Apr 20 20:12:00.550263 containerd[1634]: time="2026-04-20T20:12:00.546869245Z" level=error msg="get state for ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09" error="context deadline exceeded" Apr 20 20:12:00.583134 containerd[1634]: time="2026-04-20T20:12:00.549963613Z" level=warning msg="unknown status" status=0 Apr 20 20:12:01.328330 containerd[1634]: time="2026-04-20T20:12:00.592834447Z" level=info msg="Skipping the sending of signal terminated to container \"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" because a prior stop with timeout>0 request already sent the signal" Apr 20 20:12:02.476400 kubelet[3713]: E0420 20:12:02.459915 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:12:04.771310 containerd[1634]: time="2026-04-20T20:12:04.712761144Z" level=error msg="ttrpc: 
received message on inactive stream" stream=143 Apr 20 20:12:05.815248 containerd[1634]: time="2026-04-20T20:12:05.814963889Z" level=error msg="get state for ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09" error="context deadline exceeded" Apr 20 20:12:05.958292 containerd[1634]: time="2026-04-20T20:12:05.938675316Z" level=error msg="ttrpc: received message on inactive stream" stream=145 Apr 20 20:12:05.958292 containerd[1634]: time="2026-04-20T20:12:05.945935568Z" level=warning msg="unknown status" status=0 Apr 20 20:12:06.867130 containerd[1634]: time="2026-04-20T20:12:06.762517888Z" level=error msg="failed to delete task" error="context deadline exceeded" id=ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09 Apr 20 20:12:07.186654 containerd[1634]: time="2026-04-20T20:12:07.164663160Z" level=error msg="Failed to handle backOff event container_id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" id:\"ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09\" pid:4035 exit_status:1 exited_at:{seconds:1776715683 nanos:352679594} for ddbf0c5028245faa4e8bd06dbfa8628fdd733fc1ad83fe1683332b335b29ba09" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 20:12:07.227515 containerd[1634]: time="2026-04-20T20:12:07.175749926Z" level=error msg="ttrpc: received message on inactive stream" stream=147 Apr 20 20:12:07.494257 kubelet[3713]: E0420 20:12:07.488112 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.273s" Apr 20 20:12:08.522077 kubelet[3713]: E0420 20:12:08.517637 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:12:12.448504 kubelet[3713]: E0420 20:12:12.446971 3713 kubelet.go:2691] "Housekeeping took longer than 
expected" err="housekeeping took too long" expected="1s" actual="4.876s" Apr 20 20:12:14.100208 kubelet[3713]: E0420 20:12:14.098444 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:12:15.338431 kubelet[3713]: E0420 20:12:15.327815 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.763s" Apr 20 20:12:19.180911 kubelet[3713]: E0420 20:12:19.175742 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.8s" Apr 20 20:12:19.644932 kubelet[3713]: E0420 20:12:19.642993 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:12:22.538417 kubelet[3713]: E0420 20:12:22.536762 3713 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.348s" Apr 20 20:12:24.717853 kubelet[3713]: E0420 20:12:24.717257 3713 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"