Apr 13 23:34:08.289143 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026 Apr 13 23:34:08.289174 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 23:34:08.289188 kernel: BIOS-provided physical RAM map: Apr 13 23:34:08.289197 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 13 23:34:08.289204 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Apr 13 23:34:08.289212 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Apr 13 23:34:08.289223 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Apr 13 23:34:08.289231 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Apr 13 23:34:08.289239 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Apr 13 23:34:08.289248 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Apr 13 23:34:08.289258 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Apr 13 23:34:08.289266 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Apr 13 23:34:08.289275 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Apr 13 23:34:08.289283 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Apr 13 23:34:08.289294 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Apr 13 23:34:08.289304 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Apr 13 23:34:08.289315 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Apr 
13 23:34:08.289324 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Apr 13 23:34:08.289332 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Apr 13 23:34:08.289341 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 13 23:34:08.289501 kernel: NX (Execute Disable) protection: active Apr 13 23:34:08.289510 kernel: APIC: Static calls initialized Apr 13 23:34:08.289520 kernel: efi: EFI v2.7 by EDK II Apr 13 23:34:08.289529 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Apr 13 23:34:08.289538 kernel: SMBIOS 2.8 present. Apr 13 23:34:08.289547 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Apr 13 23:34:08.289556 kernel: Hypervisor detected: KVM Apr 13 23:34:08.289568 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 13 23:34:08.289577 kernel: kvm-clock: using sched offset of 6138227771 cycles Apr 13 23:34:08.289587 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 13 23:34:08.289596 kernel: tsc: Detected 2793.438 MHz processor Apr 13 23:34:08.289606 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 13 23:34:08.289615 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 13 23:34:08.289625 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000 Apr 13 23:34:08.289634 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 13 23:34:08.289642 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 13 23:34:08.289652 kernel: Using GB pages for direct mapping Apr 13 23:34:08.289661 kernel: Secure boot disabled Apr 13 23:34:08.289671 kernel: ACPI: Early table checksum verification disabled Apr 13 23:34:08.289680 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Apr 13 23:34:08.289694 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Apr 13 23:34:08.289704 
kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 23:34:08.289714 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 23:34:08.289726 kernel: ACPI: FACS 0x000000009CBDD000 000040 Apr 13 23:34:08.289736 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 23:34:08.289746 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 23:34:08.289756 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 23:34:08.289766 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 23:34:08.289775 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Apr 13 23:34:08.289785 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Apr 13 23:34:08.289797 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Apr 13 23:34:08.289807 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Apr 13 23:34:08.289816 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Apr 13 23:34:08.289826 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Apr 13 23:34:08.289835 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Apr 13 23:34:08.289845 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Apr 13 23:34:08.289855 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Apr 13 23:34:08.289865 kernel: No NUMA configuration found Apr 13 23:34:08.289875 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Apr 13 23:34:08.289887 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Apr 13 23:34:08.289897 kernel: Zone ranges: Apr 13 23:34:08.289906 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 13 23:34:08.289916 kernel: DMA32 [mem 
0x0000000001000000-0x000000009cf3ffff] Apr 13 23:34:08.290019 kernel: Normal empty Apr 13 23:34:08.290029 kernel: Movable zone start for each node Apr 13 23:34:08.290039 kernel: Early memory node ranges Apr 13 23:34:08.290049 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 13 23:34:08.290059 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Apr 13 23:34:08.290068 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Apr 13 23:34:08.290080 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Apr 13 23:34:08.290089 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Apr 13 23:34:08.290099 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Apr 13 23:34:08.290109 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Apr 13 23:34:08.290118 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 13 23:34:08.290128 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 13 23:34:08.290138 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Apr 13 23:34:08.290147 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 13 23:34:08.290157 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Apr 13 23:34:08.290169 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Apr 13 23:34:08.290179 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Apr 13 23:34:08.290189 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 13 23:34:08.290199 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 13 23:34:08.290209 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 13 23:34:08.290219 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 13 23:34:08.290229 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 13 23:34:08.290239 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 13 23:34:08.290249 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 13 
23:34:08.290260 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 13 23:34:08.290270 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 13 23:34:08.290280 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 13 23:34:08.290290 kernel: TSC deadline timer available Apr 13 23:34:08.290299 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 13 23:34:08.290309 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 13 23:34:08.290319 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 13 23:34:08.290329 kernel: kvm-guest: setup PV sched yield Apr 13 23:34:08.290338 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Apr 13 23:34:08.290349 kernel: Booting paravirtualized kernel on KVM Apr 13 23:34:08.290359 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 13 23:34:08.290369 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Apr 13 23:34:08.290579 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 13 23:34:08.290588 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 13 23:34:08.290597 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 13 23:34:08.290607 kernel: kvm-guest: PV spinlocks enabled Apr 13 23:34:08.290617 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 13 23:34:08.290628 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 23:34:08.290641 kernel: random: crng init done Apr 13 23:34:08.290651 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 13 23:34:08.290661 kernel: Inode-cache hash table 
entries: 262144 (order: 9, 2097152 bytes, linear) Apr 13 23:34:08.290671 kernel: Fallback order for Node 0: 0 Apr 13 23:34:08.290681 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Apr 13 23:34:08.290691 kernel: Policy zone: DMA32 Apr 13 23:34:08.290701 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 13 23:34:08.290712 kernel: Memory: 2394676K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 172120K reserved, 0K cma-reserved) Apr 13 23:34:08.290724 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 13 23:34:08.290734 kernel: ftrace: allocating 37996 entries in 149 pages Apr 13 23:34:08.290744 kernel: ftrace: allocated 149 pages with 4 groups Apr 13 23:34:08.290753 kernel: Dynamic Preempt: voluntary Apr 13 23:34:08.290826 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 13 23:34:08.290846 kernel: rcu: RCU event tracing is enabled. Apr 13 23:34:08.290858 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 13 23:34:08.290869 kernel: Trampoline variant of Tasks RCU enabled. Apr 13 23:34:08.290880 kernel: Rude variant of Tasks RCU enabled. Apr 13 23:34:08.290890 kernel: Tracing variant of Tasks RCU enabled. Apr 13 23:34:08.290901 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 13 23:34:08.290912 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 13 23:34:08.290951 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 13 23:34:08.290962 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 13 23:34:08.290973 kernel: Console: colour dummy device 80x25 Apr 13 23:34:08.290984 kernel: printk: console [ttyS0] enabled Apr 13 23:34:08.290995 kernel: ACPI: Core revision 20230628 Apr 13 23:34:08.291008 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 13 23:34:08.291019 kernel: APIC: Switch to symmetric I/O mode setup Apr 13 23:34:08.291030 kernel: x2apic enabled Apr 13 23:34:08.291041 kernel: APIC: Switched APIC routing to: physical x2apic Apr 13 23:34:08.291052 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 13 23:34:08.291063 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 13 23:34:08.291074 kernel: kvm-guest: setup PV IPIs Apr 13 23:34:08.291085 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 13 23:34:08.291096 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 13 23:34:08.291109 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438) Apr 13 23:34:08.291120 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 13 23:34:08.291131 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 13 23:34:08.291142 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 13 23:34:08.291153 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 13 23:34:08.291164 kernel: Spectre V2 : Mitigation: Retpolines Apr 13 23:34:08.291175 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 13 23:34:08.291186 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Apr 13 23:34:08.291197 kernel: RETBleed: Vulnerable Apr 13 23:34:08.291210 kernel: Speculative Store Bypass: Vulnerable Apr 13 23:34:08.291221 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 13 23:34:08.291232 kernel: GDS: Unknown: Dependent on hypervisor status Apr 13 23:34:08.291243 kernel: active return thunk: its_return_thunk Apr 13 23:34:08.291253 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 13 23:34:08.291265 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 13 23:34:08.291276 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 13 23:34:08.291287 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 13 23:34:08.291298 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 13 23:34:08.291311 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 13 23:34:08.291321 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 13 23:34:08.291332 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 13 23:34:08.291342 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 13 23:34:08.291353 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 13 23:34:08.291364 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 13 23:34:08.291431 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 13 23:34:08.291445 kernel: Freeing SMP alternatives memory: 32K Apr 13 23:34:08.291456 kernel: pid_max: default: 32768 minimum: 301 Apr 13 23:34:08.291558 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 13 23:34:08.291569 kernel: landlock: Up and running. Apr 13 23:34:08.291579 kernel: SELinux: Initializing. 
Apr 13 23:34:08.291589 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 13 23:34:08.291599 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 13 23:34:08.291608 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 13 23:34:08.291618 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 13 23:34:08.291628 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 13 23:34:08.291639 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 13 23:34:08.291652 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 13 23:34:08.291663 kernel: signal: max sigframe size: 3632 Apr 13 23:34:08.291674 kernel: rcu: Hierarchical SRCU implementation. Apr 13 23:34:08.291685 kernel: rcu: Max phase no-delay instances is 400. Apr 13 23:34:08.291696 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 13 23:34:08.291706 kernel: smp: Bringing up secondary CPUs ... Apr 13 23:34:08.291717 kernel: smpboot: x86: Booting SMP configuration: Apr 13 23:34:08.291726 kernel: .... 
node #0, CPUs: #1 #2 #3 Apr 13 23:34:08.291736 kernel: smp: Brought up 1 node, 4 CPUs Apr 13 23:34:08.291749 kernel: smpboot: Max logical packages: 1 Apr 13 23:34:08.291760 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 13 23:34:08.291771 kernel: devtmpfs: initialized Apr 13 23:34:08.291781 kernel: x86/mm: Memory block size: 128MB Apr 13 23:34:08.291851 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Apr 13 23:34:08.291862 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Apr 13 23:34:08.291873 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Apr 13 23:34:08.291884 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Apr 13 23:34:08.291895 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Apr 13 23:34:08.291908 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 13 23:34:08.291943 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 13 23:34:08.291952 kernel: pinctrl core: initialized pinctrl subsystem Apr 13 23:34:08.291961 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 13 23:34:08.291969 kernel: audit: initializing netlink subsys (disabled) Apr 13 23:34:08.291978 kernel: audit: type=2000 audit(1776123246.577:1): state=initialized audit_enabled=0 res=1 Apr 13 23:34:08.291986 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 13 23:34:08.291995 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 13 23:34:08.292006 kernel: cpuidle: using governor menu Apr 13 23:34:08.292015 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 13 23:34:08.292024 kernel: dca service started, version 1.12.1 Apr 13 23:34:08.292034 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 13 
23:34:08.292043 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 13 23:34:08.292054 kernel: PCI: Using configuration type 1 for base access Apr 13 23:34:08.292063 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 13 23:34:08.292073 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 13 23:34:08.292083 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 13 23:34:08.292095 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 13 23:34:08.292104 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 13 23:34:08.292114 kernel: ACPI: Added _OSI(Module Device) Apr 13 23:34:08.292124 kernel: ACPI: Added _OSI(Processor Device) Apr 13 23:34:08.292133 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 13 23:34:08.292143 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 13 23:34:08.292152 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 13 23:34:08.292161 kernel: ACPI: Interpreter enabled Apr 13 23:34:08.292171 kernel: ACPI: PM: (supports S0 S3 S5) Apr 13 23:34:08.292183 kernel: ACPI: Using IOAPIC for interrupt routing Apr 13 23:34:08.292193 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 13 23:34:08.292202 kernel: PCI: Using E820 reservations for host bridge windows Apr 13 23:34:08.292212 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 13 23:34:08.292221 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 13 23:34:08.292764 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 13 23:34:08.293122 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 13 23:34:08.293218 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 13 23:34:08.293236 kernel: PCI host bridge to bus 0000:00 Apr 13 23:34:08.293359 kernel: pci_bus 0000:00: 
root bus resource [io 0x0000-0x0cf7 window] Apr 13 23:34:08.296124 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 13 23:34:08.296230 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 13 23:34:08.296335 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 13 23:34:08.296457 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 13 23:34:08.296535 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Apr 13 23:34:08.296618 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 13 23:34:08.296722 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 13 23:34:08.296819 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 13 23:34:08.296906 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Apr 13 23:34:08.298195 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Apr 13 23:34:08.298313 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Apr 13 23:34:08.298468 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Apr 13 23:34:08.298556 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 13 23:34:08.298655 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 13 23:34:08.298742 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Apr 13 23:34:08.298829 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Apr 13 23:34:08.298915 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Apr 13 23:34:08.300234 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 13 23:34:08.300372 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Apr 13 23:34:08.300624 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Apr 13 23:34:08.300710 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Apr 13 23:34:08.300812 kernel: pci 0000:00:04.0: 
[1af4:1000] type 00 class 0x020000 Apr 13 23:34:08.300899 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Apr 13 23:34:08.301087 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Apr 13 23:34:08.301224 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Apr 13 23:34:08.301314 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Apr 13 23:34:08.301469 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 13 23:34:08.301558 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 13 23:34:08.301648 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 13 23:34:08.301733 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Apr 13 23:34:08.301818 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Apr 13 23:34:08.301909 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 13 23:34:08.302172 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Apr 13 23:34:08.302186 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 13 23:34:08.302196 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 13 23:34:08.302207 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 13 23:34:08.302217 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 13 23:34:08.302227 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 13 23:34:08.302237 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 13 23:34:08.302247 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 13 23:34:08.302261 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 13 23:34:08.302270 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 13 23:34:08.302281 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 13 23:34:08.302290 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 13 23:34:08.302300 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 
Apr 13 23:34:08.302310 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 13 23:34:08.302319 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 13 23:34:08.302329 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 13 23:34:08.302338 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 13 23:34:08.302350 kernel: iommu: Default domain type: Translated Apr 13 23:34:08.302360 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 13 23:34:08.302370 kernel: efivars: Registered efivars operations Apr 13 23:34:08.302424 kernel: PCI: Using ACPI for IRQ routing Apr 13 23:34:08.302435 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 13 23:34:08.302445 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Apr 13 23:34:08.302455 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Apr 13 23:34:08.302464 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Apr 13 23:34:08.302474 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Apr 13 23:34:08.302565 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 13 23:34:08.302647 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 13 23:34:08.302730 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 13 23:34:08.302742 kernel: vgaarb: loaded Apr 13 23:34:08.302752 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 13 23:34:08.302763 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 13 23:34:08.302773 kernel: clocksource: Switched to clocksource kvm-clock Apr 13 23:34:08.302782 kernel: VFS: Disk quotas dquot_6.6.0 Apr 13 23:34:08.302794 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 13 23:34:08.302804 kernel: pnp: PnP ACPI init Apr 13 23:34:08.302892 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 13 23:34:08.302905 kernel: pnp: PnP ACPI: found 6 devices Apr 13 23:34:08.302914 kernel: clocksource: 
acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 13 23:34:08.302952 kernel: NET: Registered PF_INET protocol family Apr 13 23:34:08.302962 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 13 23:34:08.302971 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 13 23:34:08.302980 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 13 23:34:08.302993 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 13 23:34:08.303000 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 13 23:34:08.303009 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 13 23:34:08.303019 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 23:34:08.303028 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 23:34:08.303038 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 13 23:34:08.303134 kernel: NET: Registered PF_XDP protocol family Apr 13 23:34:08.303229 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Apr 13 23:34:08.303318 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Apr 13 23:34:08.303447 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 13 23:34:08.303548 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 13 23:34:08.303613 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 13 23:34:08.303674 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 13 23:34:08.303734 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 13 23:34:08.303807 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Apr 13 23:34:08.303817 kernel: PCI: CLS 0 bytes, default 64 Apr 13 23:34:08.303829 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 
fixed counters, 10737418240 ms ovfl timer Apr 13 23:34:08.303837 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 13 23:34:08.303845 kernel: Initialise system trusted keyrings Apr 13 23:34:08.303854 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 13 23:34:08.303861 kernel: Key type asymmetric registered Apr 13 23:34:08.303869 kernel: Asymmetric key parser 'x509' registered Apr 13 23:34:08.303877 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 13 23:34:08.303886 kernel: io scheduler mq-deadline registered Apr 13 23:34:08.303897 kernel: io scheduler kyber registered Apr 13 23:34:08.303907 kernel: io scheduler bfq registered Apr 13 23:34:08.303916 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 13 23:34:08.303950 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 13 23:34:08.303958 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 13 23:34:08.303966 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 13 23:34:08.303975 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 13 23:34:08.303983 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 13 23:34:08.303991 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 13 23:34:08.303999 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 13 23:34:08.304010 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 13 23:34:08.304186 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 13 23:34:08.304200 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 13 23:34:08.304269 kernel: rtc_cmos 00:04: registered as rtc0 Apr 13 23:34:08.304333 kernel: rtc_cmos 00:04: setting system clock to 2026-04-13T23:34:07 UTC (1776123247) Apr 13 23:34:08.304443 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Apr 13 23:34:08.304454 kernel: intel_pstate: CPU model not supported Apr 13 
23:34:08.304462 kernel: efifb: probing for efifb Apr 13 23:34:08.304473 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Apr 13 23:34:08.304481 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Apr 13 23:34:08.304489 kernel: efifb: scrolling: redraw Apr 13 23:34:08.304497 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Apr 13 23:34:08.304505 kernel: Console: switching to colour frame buffer device 100x37 Apr 13 23:34:08.304513 kernel: fb0: EFI VGA frame buffer device Apr 13 23:34:08.304537 kernel: pstore: Using crash dump compression: deflate Apr 13 23:34:08.304547 kernel: pstore: Registered efi_pstore as persistent store backend Apr 13 23:34:08.304555 kernel: NET: Registered PF_INET6 protocol family Apr 13 23:34:08.304567 kernel: Segment Routing with IPv6 Apr 13 23:34:08.304575 kernel: In-situ OAM (IOAM) with IPv6 Apr 13 23:34:08.304584 kernel: NET: Registered PF_PACKET protocol family Apr 13 23:34:08.304592 kernel: Key type dns_resolver registered Apr 13 23:34:08.304600 kernel: IPI shorthand broadcast: enabled Apr 13 23:34:08.304608 kernel: sched_clock: Marking stable (1369021494, 294486680)->(1798359182, -134851008) Apr 13 23:34:08.304617 kernel: registered taskstats version 1 Apr 13 23:34:08.304625 kernel: Loading compiled-in X.509 certificates Apr 13 23:34:08.304633 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00' Apr 13 23:34:08.304643 kernel: Key type .fscrypt registered Apr 13 23:34:08.304652 kernel: Key type fscrypt-provisioning registered Apr 13 23:34:08.304660 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 13 23:34:08.304668 kernel: ima: Allocated hash algorithm: sha1
Apr 13 23:34:08.304677 kernel: ima: No architecture policies found
Apr 13 23:34:08.304685 kernel: clk: Disabling unused clocks
Apr 13 23:34:08.304693 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 13 23:34:08.304701 kernel: Write protecting the kernel read-only data: 36864k
Apr 13 23:34:08.304709 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 13 23:34:08.304720 kernel: Run /init as init process
Apr 13 23:34:08.304728 kernel: with arguments:
Apr 13 23:34:08.304736 kernel: /init
Apr 13 23:34:08.304744 kernel: with environment:
Apr 13 23:34:08.304752 kernel: HOME=/
Apr 13 23:34:08.304763 kernel: TERM=linux
Apr 13 23:34:08.304776 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 23:34:08.304790 systemd[1]: Detected virtualization kvm.
Apr 13 23:34:08.304804 systemd[1]: Detected architecture x86-64.
Apr 13 23:34:08.304813 systemd[1]: Running in initrd.
Apr 13 23:34:08.304822 systemd[1]: No hostname configured, using default hostname.
Apr 13 23:34:08.304831 systemd[1]: Hostname set to .
Apr 13 23:34:08.304840 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 23:34:08.304850 systemd[1]: Queued start job for default target initrd.target.
Apr 13 23:34:08.304859 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 23:34:08.304869 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 23:34:08.304878 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 23:34:08.304889 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 23:34:08.304898 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 23:34:08.304907 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 23:34:08.304945 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 23:34:08.304954 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 23:34:08.304964 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 23:34:08.304973 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 23:34:08.304982 systemd[1]: Reached target paths.target - Path Units.
Apr 13 23:34:08.304993 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 23:34:08.305003 systemd[1]: Reached target swap.target - Swaps.
Apr 13 23:34:08.305014 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 23:34:08.305027 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 23:34:08.305036 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 23:34:08.305045 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 23:34:08.305054 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 23:34:08.305063 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 23:34:08.305072 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 23:34:08.305081 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 23:34:08.305090 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 23:34:08.305101 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 23:34:08.305110 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 23:34:08.305119 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 23:34:08.305128 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 23:34:08.305137 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 23:34:08.305214 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 23:34:08.305224 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:34:08.305255 systemd-journald[195]: Collecting audit messages is disabled.
Apr 13 23:34:08.305280 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 23:34:08.305289 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 23:34:08.305299 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 23:34:08.305311 systemd-journald[195]: Journal started
Apr 13 23:34:08.305331 systemd-journald[195]: Runtime Journal (/run/log/journal/adb837d1ccf94f8b87a3d9e02b4c3a5b) is 6.0M, max 48.3M, 42.2M free.
Apr 13 23:34:08.311892 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 23:34:08.319850 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 23:34:08.324621 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 23:34:08.325092 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 23:34:08.333863 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 23:34:08.353117 systemd-modules-load[196]: Inserted module 'overlay'
Apr 13 23:34:08.358738 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 23:34:08.365233 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 23:34:08.384194 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:34:08.402870 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 23:34:08.428152 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 23:34:08.489813 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 23:34:08.510270 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 23:34:08.516339 kernel: Bridge firewalling registered
Apr 13 23:34:08.516616 systemd-modules-load[196]: Inserted module 'br_netfilter'
Apr 13 23:34:08.522780 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 23:34:08.531824 dracut-cmdline[224]: dracut-dracut-053
Apr 13 23:34:08.539676 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 23:34:08.548129 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 23:34:08.587626 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 23:34:08.600818 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 23:34:08.641139 systemd-resolved[254]: Positive Trust Anchors:
Apr 13 23:34:08.641265 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 23:34:08.641300 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 23:34:08.644800 systemd-resolved[254]: Defaulting to hostname 'linux'.
Apr 13 23:34:08.646316 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 23:34:08.659138 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 23:34:08.748728 kernel: SCSI subsystem initialized
Apr 13 23:34:08.764673 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 23:34:08.788637 kernel: iscsi: registered transport (tcp)
Apr 13 23:34:08.823905 kernel: iscsi: registered transport (qla4xxx)
Apr 13 23:34:08.824187 kernel: QLogic iSCSI HBA Driver
Apr 13 23:34:08.938563 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 23:34:08.959109 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 23:34:09.014504 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 23:34:09.014608 kernel: device-mapper: uevent: version 1.0.3
Apr 13 23:34:09.017613 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 23:34:09.122546 kernel: raid6: avx512x4 gen() 38295 MB/s
Apr 13 23:34:09.140569 kernel: raid6: avx512x2 gen() 38077 MB/s
Apr 13 23:34:09.158749 kernel: raid6: avx512x1 gen() 34125 MB/s
Apr 13 23:34:09.176474 kernel: raid6: avx2x4 gen() 31035 MB/s
Apr 13 23:34:09.195466 kernel: raid6: avx2x2 gen() 25782 MB/s
Apr 13 23:34:09.215504 kernel: raid6: avx2x1 gen() 20723 MB/s
Apr 13 23:34:09.215600 kernel: raid6: using algorithm avx512x4 gen() 38295 MB/s
Apr 13 23:34:09.237478 kernel: raid6: .... xor() 8600 MB/s, rmw enabled
Apr 13 23:34:09.237566 kernel: raid6: using avx512x2 recovery algorithm
Apr 13 23:34:09.265638 kernel: xor: automatically using best checksumming function avx
Apr 13 23:34:09.515896 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 23:34:09.614187 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 23:34:09.631058 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 23:34:09.656168 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Apr 13 23:34:09.662886 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 23:34:09.683889 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 23:34:09.708996 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Apr 13 23:34:09.780197 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 23:34:09.804964 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 23:34:09.869133 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 23:34:09.899757 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 23:34:09.916122 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 23:34:09.927830 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 23:34:09.941713 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 23:34:09.947559 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 23:34:09.974845 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 23:34:09.996092 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 13 23:34:10.003497 kernel: cryptd: max_cpu_qlen set to 1000
Apr 13 23:34:10.007666 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 23:34:10.020029 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 13 23:34:10.031211 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 23:34:10.031284 kernel: GPT:9289727 != 19775487
Apr 13 23:34:10.031297 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 23:34:10.031511 kernel: GPT:9289727 != 19775487
Apr 13 23:34:10.037773 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 23:34:10.052241 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 23:34:10.052272 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 23:34:10.038116 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 23:34:10.053847 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 23:34:10.063568 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 23:34:10.064298 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:34:10.078474 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:34:10.186974 kernel: hrtimer: interrupt took 6042457 ns
Apr 13 23:34:10.189054 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:34:10.213861 kernel: libata version 3.00 loaded.
Apr 13 23:34:10.228493 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 13 23:34:10.236997 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:34:10.263550 kernel: AES CTR mode by8 optimization enabled
Apr 13 23:34:10.263590 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (474)
Apr 13 23:34:10.275474 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (468)
Apr 13 23:34:10.275487 kernel: ahci 0000:00:1f.2: version 3.0
Apr 13 23:34:10.276025 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 13 23:34:10.276056 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 13 23:34:10.281040 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 13 23:34:10.290465 kernel: scsi host0: ahci
Apr 13 23:34:10.293706 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 13 23:34:10.312203 kernel: scsi host1: ahci
Apr 13 23:34:10.308559 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 13 23:34:10.374011 kernel: scsi host2: ahci
Apr 13 23:34:10.374285 kernel: scsi host3: ahci
Apr 13 23:34:10.374625 kernel: scsi host4: ahci
Apr 13 23:34:10.374734 kernel: scsi host5: ahci
Apr 13 23:34:10.374821 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Apr 13 23:34:10.374832 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Apr 13 23:34:10.374842 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Apr 13 23:34:10.374852 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Apr 13 23:34:10.374862 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Apr 13 23:34:10.374872 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Apr 13 23:34:10.341609 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 13 23:34:10.366832 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 13 23:34:10.380884 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 13 23:34:10.404347 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 23:34:10.407427 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 23:34:10.422092 disk-uuid[556]: Primary Header is updated.
Apr 13 23:34:10.422092 disk-uuid[556]: Secondary Entries is updated.
Apr 13 23:34:10.422092 disk-uuid[556]: Secondary Header is updated.
Apr 13 23:34:10.432463 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 23:34:10.442552 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 23:34:10.690552 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 13 23:34:10.694539 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 13 23:34:10.698538 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 13 23:34:10.705178 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 13 23:34:10.705273 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 13 23:34:10.708756 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 13 23:34:10.715108 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 13 23:34:10.718361 kernel: ata3.00: applying bridge limits
Apr 13 23:34:10.720137 kernel: ata3.00: configured for UDMA/100
Apr 13 23:34:10.730515 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 13 23:34:10.828583 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 13 23:34:10.829164 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 13 23:34:10.847813 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 13 23:34:11.474963 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 23:34:11.481335 disk-uuid[560]: The operation has completed successfully.
Apr 13 23:34:11.541683 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 23:34:11.541845 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 23:34:11.596344 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 23:34:11.608360 sh[597]: Success
Apr 13 23:34:11.703550 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 13 23:34:11.786160 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 23:34:11.808581 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 23:34:11.820999 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 23:34:11.851664 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 13 23:34:11.851741 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 13 23:34:11.851754 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 23:34:11.856092 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 23:34:11.861155 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 23:34:11.891341 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 23:34:11.898687 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 23:34:11.915887 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 23:34:11.921984 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 23:34:11.952642 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:34:11.952708 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 23:34:11.957358 kernel: BTRFS info (device vda6): using free space tree
Apr 13 23:34:11.977588 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 23:34:12.013574 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 23:34:12.024131 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:34:12.057304 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 23:34:12.092114 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 23:34:12.320761 ignition[691]: Ignition 2.19.0
Apr 13 23:34:12.320804 ignition[691]: Stage: fetch-offline
Apr 13 23:34:12.320844 ignition[691]: no configs at "/usr/lib/ignition/base.d"
Apr 13 23:34:12.320855 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:34:12.321132 ignition[691]: parsed url from cmdline: ""
Apr 13 23:34:12.321137 ignition[691]: no config URL provided
Apr 13 23:34:12.321143 ignition[691]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 23:34:12.321156 ignition[691]: no config at "/usr/lib/ignition/user.ign"
Apr 13 23:34:12.321188 ignition[691]: op(1): [started] loading QEMU firmware config module
Apr 13 23:34:12.321198 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 13 23:34:12.387736 ignition[691]: op(1): [finished] loading QEMU firmware config module
Apr 13 23:34:12.492299 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 23:34:12.515806 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 23:34:12.552714 systemd-networkd[785]: lo: Link UP
Apr 13 23:34:12.552756 systemd-networkd[785]: lo: Gained carrier
Apr 13 23:34:12.554337 systemd-networkd[785]: Enumeration completed
Apr 13 23:34:12.555839 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 23:34:12.555842 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 23:34:12.557751 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 23:34:12.575171 systemd[1]: Reached target network.target - Network.
Apr 13 23:34:12.579614 systemd-networkd[785]: eth0: Link UP
Apr 13 23:34:12.579625 systemd-networkd[785]: eth0: Gained carrier
Apr 13 23:34:12.579640 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 23:34:12.626539 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 13 23:34:12.896073 ignition[691]: parsing config with SHA512: 5b4de7d4f4827a167e7939550578cbb4c67348ade7fca3ad32760094d3c32483cce7831300fb5246dda0cf1a4cbd5bb5c18ab60f8aa9fbed4859bb65c4028c07
Apr 13 23:34:12.910688 unknown[691]: fetched base config from "system"
Apr 13 23:34:12.911669 ignition[691]: fetch-offline: fetch-offline passed
Apr 13 23:34:12.910700 unknown[691]: fetched user config from "qemu"
Apr 13 23:34:12.911734 ignition[691]: Ignition finished successfully
Apr 13 23:34:12.919008 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 23:34:12.927069 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 13 23:34:12.956868 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 23:34:12.999769 ignition[789]: Ignition 2.19.0
Apr 13 23:34:13.000436 ignition[789]: Stage: kargs
Apr 13 23:34:13.000672 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Apr 13 23:34:13.000681 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:34:13.006625 ignition[789]: kargs: kargs passed
Apr 13 23:34:13.026371 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 23:34:13.006701 ignition[789]: Ignition finished successfully
Apr 13 23:34:13.057193 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 23:34:13.124141 ignition[797]: Ignition 2.19.0
Apr 13 23:34:13.124190 ignition[797]: Stage: disks
Apr 13 23:34:13.136682 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Apr 13 23:34:13.136777 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:34:13.148664 ignition[797]: disks: disks passed
Apr 13 23:34:13.150659 ignition[797]: Ignition finished successfully
Apr 13 23:34:13.162741 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 23:34:13.176625 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 23:34:13.176799 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 23:34:13.199238 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 23:34:13.208674 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 23:34:13.213711 systemd[1]: Reached target basic.target - Basic System.
Apr 13 23:34:13.234994 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 23:34:13.310769 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 13 23:34:13.322705 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 23:34:13.396209 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 23:34:13.831752 systemd-networkd[785]: eth0: Gained IPv6LL
Apr 13 23:34:14.009197 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 13 23:34:14.011338 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 23:34:14.027201 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 23:34:14.047338 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 23:34:14.074275 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 23:34:14.098707 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 13 23:34:14.122274 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815)
Apr 13 23:34:14.098840 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 23:34:14.162794 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:34:14.162826 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 23:34:14.162840 kernel: BTRFS info (device vda6): using free space tree
Apr 13 23:34:14.164665 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 23:34:14.098880 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 23:34:14.107888 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 23:34:14.116279 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 23:34:14.164451 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 23:34:14.298271 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 23:34:14.335983 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Apr 13 23:34:14.363269 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 23:34:14.384354 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 23:34:14.778538 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 23:34:14.998754 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 23:34:15.051467 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:34:15.057038 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 23:34:15.063348 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 23:34:15.226554 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 23:34:15.587706 ignition[928]: INFO : Ignition 2.19.0
Apr 13 23:34:15.587706 ignition[928]: INFO : Stage: mount
Apr 13 23:34:15.606704 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 23:34:15.606704 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:34:15.606704 ignition[928]: INFO : mount: mount passed
Apr 13 23:34:15.606704 ignition[928]: INFO : Ignition finished successfully
Apr 13 23:34:15.651757 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 23:34:15.671715 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 23:34:15.772539 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 23:34:15.856565 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944)
Apr 13 23:34:15.872263 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:34:15.872360 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 23:34:15.872547 kernel: BTRFS info (device vda6): using free space tree
Apr 13 23:34:15.909026 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 23:34:15.912187 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 23:34:16.133130 ignition[961]: INFO : Ignition 2.19.0
Apr 13 23:34:16.133130 ignition[961]: INFO : Stage: files
Apr 13 23:34:16.152005 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 23:34:16.152005 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:34:16.162939 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 23:34:16.175551 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 23:34:16.175551 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 23:34:16.197225 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 23:34:16.208355 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 23:34:16.225321 unknown[961]: wrote ssh authorized keys file for user: core
Apr 13 23:34:16.231849 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 23:34:16.246008 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 13 23:34:16.246008 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 13 23:34:16.264166 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 23:34:16.264166 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 13 23:34:16.480721 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 13 23:34:17.470598 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 23:34:17.485319 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 13 23:34:17.485319 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 13 23:34:18.292002 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Apr 13 23:34:19.695450 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 13 23:34:19.707159 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 23:34:19.707159 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 23:34:19.707159 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 23:34:19.707159 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 23:34:19.707159 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 23:34:19.707159 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 23:34:19.707159 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 23:34:19.707159 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 23:34:19.707159 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 23:34:19.801294 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 23:34:19.801294 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 23:34:19.801294 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 23:34:19.801294 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 23:34:19.801294 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 13 23:34:19.890600 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Apr 13 23:34:26.979681 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 23:34:26.993166 ignition[961]: INFO : files: op(d): [started] processing unit "containerd.service"
Apr 13 23:34:27.000362 ignition[961]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 13 23:34:27.014524 ignition[961]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 13 23:34:27.033659 ignition[961]: INFO : files: op(d): [finished] processing unit "containerd.service"
Apr 13 23:34:27.114466 ignition[961]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Apr 13 23:34:27.131158 ignition[961]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 23:34:27.131158 ignition[961]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 23:34:27.131158 ignition[961]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Apr 13 23:34:27.131158 ignition[961]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Apr 13 23:34:27.131158 ignition[961]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 13 23:34:27.131158 ignition[961]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 13 23:34:27.131158 ignition[961]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Apr 13 23:34:27.131158 ignition[961]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Apr 13 23:34:27.381648 ignition[961]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 13 23:34:27.416106 ignition[961]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 13 23:34:27.421545 ignition[961]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 13 23:34:27.421545 ignition[961]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 23:34:27.421545 ignition[961]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 23:34:27.421545 ignition[961]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 23:34:27.421545 ignition[961]: INFO : files:
createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 13 23:34:27.421545 ignition[961]: INFO : files: files passed Apr 13 23:34:27.421545 ignition[961]: INFO : Ignition finished successfully Apr 13 23:34:27.421601 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 13 23:34:27.476909 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 13 23:34:27.492875 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 13 23:34:27.497854 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 13 23:34:27.497956 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 13 23:34:27.546241 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory Apr 13 23:34:27.556656 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 23:34:27.556656 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 13 23:34:27.569178 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 23:34:27.586065 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 23:34:27.595202 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 13 23:34:27.618799 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 13 23:34:27.711766 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 13 23:34:27.712793 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 13 23:34:27.724255 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 13 23:34:27.728191 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Apr 13 23:34:27.781707 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 23:34:27.803616 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 23:34:27.848286 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 23:34:27.871826 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 23:34:27.906618 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 23:34:27.911950 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 23:34:27.920825 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 23:34:27.998662 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 23:34:27.999099 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 23:34:28.015999 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 23:34:28.020842 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 23:34:28.026624 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 23:34:28.056270 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 23:34:28.067691 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 23:34:28.072489 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 23:34:28.088513 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 23:34:28.088814 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 23:34:28.111930 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 23:34:28.120612 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 23:34:28.125712 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 23:34:28.125906 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 23:34:28.141644 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 23:34:28.151994 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 23:34:28.162135 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 23:34:28.163342 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 23:34:28.174645 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 23:34:28.175882 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 23:34:28.177490 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 23:34:28.177644 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 23:34:28.177985 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 23:34:28.180522 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 23:34:28.183662 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 23:34:28.197687 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 23:34:28.224742 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 23:34:28.287273 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 23:34:28.287466 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 23:34:28.308340 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 23:34:28.308860 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 23:34:28.314618 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 23:34:28.315062 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 23:34:28.333941 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 23:34:28.337717 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 23:34:28.372038 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 23:34:28.376079 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 23:34:28.380707 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 23:34:28.394686 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 23:34:28.397995 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 23:34:28.398546 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 23:34:28.407982 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 23:34:28.408898 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 23:34:28.426712 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 23:34:28.426855 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 23:34:28.446651 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 23:34:28.453673 ignition[1015]: INFO : Ignition 2.19.0
Apr 13 23:34:28.453673 ignition[1015]: INFO : Stage: umount
Apr 13 23:34:28.474463 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 23:34:28.474463 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:34:28.474463 ignition[1015]: INFO : umount: umount passed
Apr 13 23:34:28.474463 ignition[1015]: INFO : Ignition finished successfully
Apr 13 23:34:28.456816 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 23:34:28.457092 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 23:34:28.464760 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 23:34:28.465299 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 23:34:28.475873 systemd[1]: Stopped target network.target - Network.
Apr 13 23:34:28.483692 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 23:34:28.485213 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 23:34:28.490993 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 23:34:28.491153 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 23:34:28.515930 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 23:34:28.519691 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 23:34:28.574780 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 23:34:28.574902 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 23:34:28.607850 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 23:34:28.608338 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 23:34:28.619815 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 23:34:28.623675 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 23:34:28.646655 systemd-networkd[785]: eth0: DHCPv6 lease lost
Apr 13 23:34:28.647654 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 23:34:28.648083 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 23:34:28.667818 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 23:34:28.668944 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 23:34:28.685668 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 23:34:28.685807 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 23:34:28.712001 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 23:34:28.721870 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 23:34:28.722132 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 23:34:28.725307 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 23:34:28.725472 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 23:34:28.725580 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 23:34:28.725616 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 23:34:28.731917 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 23:34:28.732063 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 23:34:28.802259 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 23:34:28.858335 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 23:34:28.858834 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 23:34:28.929230 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 23:34:28.930181 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 23:34:28.953843 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 23:34:28.955497 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 23:34:28.966492 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 23:34:28.966709 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 23:34:28.970144 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 23:34:28.970218 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 23:34:29.005658 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 23:34:29.007706 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 23:34:29.032067 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 23:34:29.032680 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 23:34:29.120999 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 23:34:29.131069 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 23:34:29.131458 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 23:34:29.138660 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 23:34:29.138795 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:34:29.162813 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 23:34:29.163285 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 23:34:29.177904 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 23:34:29.198756 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 23:34:29.220939 systemd[1]: Switching root.
Apr 13 23:34:29.260161 systemd-journald[195]: Journal stopped
Apr 13 23:34:32.891803 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Apr 13 23:34:32.891910 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 23:34:32.891930 kernel: SELinux: policy capability open_perms=1
Apr 13 23:34:32.891951 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 23:34:32.891966 kernel: SELinux: policy capability always_check_network=0
Apr 13 23:34:32.891983 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 23:34:32.891996 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 23:34:32.892009 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 23:34:32.895288 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 23:34:32.895338 kernel: audit: type=1403 audit(1776123269.627:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 23:34:32.895358 systemd[1]: Successfully loaded SELinux policy in 72.226ms.
Apr 13 23:34:32.895465 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 41.537ms.
Apr 13 23:34:32.895485 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 23:34:32.895507 systemd[1]: Detected virtualization kvm.
Apr 13 23:34:32.895521 systemd[1]: Detected architecture x86-64.
Apr 13 23:34:32.895535 systemd[1]: Detected first boot.
Apr 13 23:34:32.895549 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 23:34:32.895563 zram_generator::config[1076]: No configuration found.
Apr 13 23:34:32.895580 systemd[1]: Populated /etc with preset unit settings.
Apr 13 23:34:32.895595 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 23:34:32.895609 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 13 23:34:32.895627 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 23:34:32.895643 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 23:34:32.895657 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 23:34:32.895678 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 23:34:32.895692 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 23:34:32.895705 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 23:34:32.895720 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 23:34:32.895735 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 23:34:32.895749 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 23:34:32.895765 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 23:34:32.895780 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 23:34:32.895795 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 23:34:32.895809 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 23:34:32.895823 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 23:34:32.895837 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 13 23:34:32.895850 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 23:34:32.895864 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 23:34:32.895878 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 23:34:32.895894 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 23:34:32.895908 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 23:34:32.895922 systemd[1]: Reached target swap.target - Swaps.
Apr 13 23:34:32.895934 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 23:34:32.895946 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 23:34:32.895959 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 23:34:32.895973 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 23:34:32.895985 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 23:34:32.895999 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 23:34:32.896013 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 23:34:32.896782 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 23:34:32.896856 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 23:34:32.896868 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 23:34:32.896883 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 23:34:32.896896 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 23:34:32.896908 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 23:34:32.896920 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 23:34:32.896938 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 23:34:32.896950 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 23:34:32.896961 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 23:34:32.896973 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 23:34:32.896985 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 23:34:32.896997 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 23:34:32.897008 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 23:34:32.897019 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 23:34:32.897070 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 23:34:32.897083 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 23:34:32.897101 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 23:34:32.897113 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 13 23:34:32.897130 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 13 23:34:32.897141 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 23:34:32.897152 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 23:34:32.897164 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 23:34:32.897176 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 23:34:32.897190 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 23:34:32.897202 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 23:34:32.897294 systemd-journald[1168]: Collecting audit messages is disabled.
Apr 13 23:34:32.897324 systemd-journald[1168]: Journal started
Apr 13 23:34:32.897352 systemd-journald[1168]: Runtime Journal (/run/log/journal/adb837d1ccf94f8b87a3d9e02b4c3a5b) is 6.0M, max 48.3M, 42.2M free.
Apr 13 23:34:32.913950 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 23:34:32.935731 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 23:34:32.964447 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 23:34:32.979985 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 23:34:33.036005 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 23:34:33.084208 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 23:34:33.126150 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 23:34:33.192365 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 23:34:33.206954 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 23:34:33.232126 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 23:34:33.232538 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 23:34:33.266585 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 23:34:33.266830 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 23:34:33.290771 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 23:34:33.290984 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 23:34:33.317890 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 23:34:33.342246 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 23:34:33.362861 kernel: fuse: init (API version 7.39)
Apr 13 23:34:33.358065 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 23:34:33.469765 kernel: loop: module loaded
Apr 13 23:34:33.468987 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 23:34:33.469539 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 23:34:33.513852 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 23:34:33.516833 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 23:34:33.548481 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 23:34:33.588676 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 23:34:33.624977 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 23:34:33.646153 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 23:34:33.703172 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 23:34:33.730491 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 23:34:33.751308 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 23:34:33.766889 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 23:34:33.779345 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 23:34:33.797826 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 23:34:33.911913 systemd-journald[1168]: Time spent on flushing to /var/log/journal/adb837d1ccf94f8b87a3d9e02b4c3a5b is 144.950ms for 980 entries.
Apr 13 23:34:33.911913 systemd-journald[1168]: System Journal (/var/log/journal/adb837d1ccf94f8b87a3d9e02b4c3a5b) is 8.0M, max 195.6M, 187.6M free.
Apr 13 23:34:34.457827 systemd-journald[1168]: Received client request to flush runtime journal.
Apr 13 23:34:34.457938 kernel: ACPI: bus type drm_connector registered
Apr 13 23:34:33.931799 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 23:34:34.009213 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 23:34:34.026669 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 23:34:34.044666 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 23:34:34.088935 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 23:34:34.107755 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 23:34:34.108004 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 23:34:34.404120 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 23:34:34.475223 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 23:34:34.514830 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 23:34:34.586739 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Apr 13 23:34:34.590115 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Apr 13 23:34:34.595875 udevadm[1223]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 13 23:34:34.607873 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 23:34:34.687472 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 23:34:34.745993 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 23:34:35.504169 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 23:34:35.627007 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 23:34:35.861846 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Apr 13 23:34:35.867462 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Apr 13 23:34:35.948751 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 23:34:39.458595 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 23:34:39.507774 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 23:34:39.674910 systemd-udevd[1242]: Using default interface naming scheme 'v255'.
Apr 13 23:34:39.896619 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 23:34:39.939922 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 23:34:40.001165 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 23:34:40.260490 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 13 23:34:40.420201 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1262)
Apr 13 23:34:40.586599 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 23:34:40.898888 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 13 23:34:40.928208 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 13 23:34:40.946806 kernel: ACPI: button: Power Button [PWRF]
Apr 13 23:34:40.978630 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 13 23:34:40.989997 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 13 23:34:41.026750 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 13 23:34:41.046722 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 13 23:34:41.046890 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 13 23:34:41.076613 systemd-networkd[1246]: lo: Link UP
Apr 13 23:34:41.076669 systemd-networkd[1246]: lo: Gained carrier
Apr 13 23:34:41.101165 systemd-networkd[1246]: Enumeration completed
Apr 13 23:34:41.105183 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 23:34:41.133731 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 23:34:41.187022 systemd-networkd[1246]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 23:34:41.195868 systemd-networkd[1246]: eth0: Link UP
Apr 13 23:34:41.195885 systemd-networkd[1246]: eth0: Gained carrier
Apr 13 23:34:41.195914 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 23:34:41.243301 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 23:34:41.245712 systemd-networkd[1246]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 13 23:34:41.471909 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:34:41.519369 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 23:34:41.587947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:34:41.752053 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:34:42.238316 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:34:42.438980 kernel: mousedev: PS/2 mouse device common for all mice
Apr 13 23:34:42.698646 systemd-networkd[1246]: eth0: Gained IPv6LL
Apr 13 23:34:42.719030 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 23:34:43.438716 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 23:34:43.520840 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 23:34:43.633744 lvm[1293]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 23:34:43.722991 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 23:34:43.747493 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 23:34:43.782426 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 23:34:43.925879 lvm[1296]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 23:34:44.187356 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 23:34:44.194257 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 23:34:44.209159 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 23:34:44.209227 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 23:34:44.215438 systemd[1]: Reached target machines.target - Containers.
Apr 13 23:34:44.250845 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 23:34:44.288315 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 23:34:44.320664 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 23:34:44.345608 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 23:34:44.387492 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 23:34:44.408106 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 23:34:44.449534 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 23:34:44.485565 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 23:34:44.514908 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 23:34:44.625307 kernel: loop0: detected capacity change from 0 to 142488
Apr 13 23:34:44.722025 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 23:34:44.723007 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 13 23:34:44.948486 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 13 23:34:45.102110 kernel: loop1: detected capacity change from 0 to 228704 Apr 13 23:34:45.642756 kernel: loop2: detected capacity change from 0 to 140768 Apr 13 23:34:46.067624 kernel: loop3: detected capacity change from 0 to 142488 Apr 13 23:34:46.407555 kernel: loop4: detected capacity change from 0 to 228704 Apr 13 23:34:46.541812 kernel: loop5: detected capacity change from 0 to 140768 Apr 13 23:34:46.851936 (sd-merge)[1316]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 13 23:34:46.859305 (sd-merge)[1316]: Merged extensions into '/usr'. Apr 13 23:34:47.134721 systemd[1]: Reloading requested from client PID 1304 ('systemd-sysext') (unit systemd-sysext.service)... Apr 13 23:34:47.135042 systemd[1]: Reloading... Apr 13 23:34:48.125648 zram_generator::config[1339]: No configuration found. Apr 13 23:34:49.459459 ldconfig[1301]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 13 23:34:50.232823 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 23:34:51.509243 systemd[1]: Reloading finished in 4370 ms. Apr 13 23:34:51.662464 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 13 23:34:51.690074 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 13 23:34:51.877181 systemd[1]: Starting ensure-sysext.service... Apr 13 23:34:51.915507 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 23:34:52.021636 systemd[1]: Reloading requested from client PID 1387 ('systemctl') (unit ensure-sysext.service)... 
Apr 13 23:34:52.021687 systemd[1]: Reloading... Apr 13 23:34:52.152895 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 13 23:34:52.167288 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 13 23:34:52.186918 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 13 23:34:52.205006 systemd-tmpfiles[1388]: ACLs are not supported, ignoring. Apr 13 23:34:52.210057 systemd-tmpfiles[1388]: ACLs are not supported, ignoring. Apr 13 23:34:52.226771 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 23:34:52.226837 systemd-tmpfiles[1388]: Skipping /boot Apr 13 23:34:52.381731 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 23:34:52.381769 systemd-tmpfiles[1388]: Skipping /boot Apr 13 23:34:52.496454 zram_generator::config[1419]: No configuration found. Apr 13 23:34:54.371919 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 23:34:55.130857 systemd[1]: Reloading finished in 3108 ms. Apr 13 23:34:55.471794 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 23:34:55.556026 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 23:34:55.610327 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 13 23:34:55.630985 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 13 23:34:55.677075 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 13 23:34:55.692718 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 13 23:34:55.707346 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 23:34:55.708614 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 23:34:55.715007 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 23:34:55.726217 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 23:34:55.759624 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 23:34:55.765452 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 23:34:55.765699 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 23:34:55.791767 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 13 23:34:55.810883 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 23:34:55.813355 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 23:34:55.821069 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 23:34:55.829099 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 23:34:55.882969 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 23:34:55.883287 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 23:34:55.901183 augenrules[1490]: No rules Apr 13 23:34:55.903632 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 23:34:55.912066 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Apr 13 23:34:55.975079 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 23:34:55.977004 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 23:34:56.004978 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 13 23:34:56.014069 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 13 23:34:56.019945 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 13 23:34:56.062342 systemd-resolved[1469]: Positive Trust Anchors: Apr 13 23:34:56.062928 systemd-resolved[1469]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 23:34:56.062969 systemd-resolved[1469]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 23:34:56.082070 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 23:34:56.086211 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 23:34:56.103934 systemd-resolved[1469]: Defaulting to hostname 'linux'. 
Apr 13 23:34:56.111310 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 23:34:56.164171 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 23:34:56.175087 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 23:34:56.180654 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 23:34:56.181015 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 13 23:34:56.181218 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 23:34:56.189055 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 23:34:56.197113 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 13 23:34:56.206875 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 23:34:56.208190 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 23:34:56.248950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 23:34:56.251214 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 23:34:56.291298 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 23:34:56.291671 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 23:34:56.400234 systemd[1]: Reached target network.target - Network. Apr 13 23:34:56.407299 systemd[1]: Reached target network-online.target - Network is Online. Apr 13 23:34:56.416733 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Apr 13 23:34:56.434000 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 23:34:56.476770 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 23:34:56.512965 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 23:34:56.608224 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 23:34:56.626361 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 23:34:56.657037 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 23:34:56.665059 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 23:34:56.666038 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 13 23:34:56.667326 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 23:34:56.692596 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 23:34:56.692955 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 23:34:56.786451 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 23:34:56.786712 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 23:34:56.796922 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 23:34:56.799962 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 23:34:56.848466 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 23:34:56.850001 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Apr 13 23:34:56.873026 systemd[1]: Finished ensure-sysext.service. Apr 13 23:34:56.999029 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 23:34:57.007011 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 23:34:57.102863 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 13 23:34:57.940295 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 13 23:34:58.544637 systemd-resolved[1469]: Clock change detected. Flushing caches. Apr 13 23:34:58.544725 systemd-timesyncd[1533]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 13 23:34:58.544808 systemd-timesyncd[1533]: Initial clock synchronization to Mon 2026-04-13 23:34:58.544377 UTC. Apr 13 23:34:58.555822 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 23:34:58.562570 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 13 23:34:58.570057 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 13 23:34:58.577196 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 13 23:34:58.584272 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 13 23:34:58.584393 systemd[1]: Reached target paths.target - Path Units. Apr 13 23:34:58.598072 systemd[1]: Reached target time-set.target - System Time Set. Apr 13 23:34:58.617465 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 13 23:34:58.637795 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 13 23:34:58.651711 systemd[1]: Reached target timers.target - Timer Units. 
Apr 13 23:34:58.676753 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 13 23:34:58.698987 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 13 23:34:58.710646 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 13 23:34:58.718070 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 13 23:34:58.722416 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 23:34:58.726208 systemd[1]: Reached target basic.target - Basic System. Apr 13 23:34:58.732932 systemd[1]: System is tainted: cgroupsv1 Apr 13 23:34:58.733250 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 13 23:34:58.733338 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 13 23:34:58.781568 systemd[1]: Starting containerd.service - containerd container runtime... Apr 13 23:34:58.795721 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 13 23:34:58.820651 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 13 23:34:58.829076 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 13 23:34:58.850681 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 13 23:34:58.858404 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 13 23:34:58.863179 jq[1542]: false Apr 13 23:34:58.864290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:34:58.876740 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 13 23:34:58.878656 dbus-daemon[1540]: [system] SELinux support is enabled Apr 13 23:34:58.902682 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Apr 13 23:34:58.906267 extend-filesystems[1543]: Found loop3 Apr 13 23:34:58.906267 extend-filesystems[1543]: Found loop4 Apr 13 23:34:58.906267 extend-filesystems[1543]: Found loop5 Apr 13 23:34:58.906267 extend-filesystems[1543]: Found sr0 Apr 13 23:34:58.906267 extend-filesystems[1543]: Found vda Apr 13 23:34:58.906267 extend-filesystems[1543]: Found vda1 Apr 13 23:34:58.906267 extend-filesystems[1543]: Found vda2 Apr 13 23:34:58.906267 extend-filesystems[1543]: Found vda3 Apr 13 23:34:58.906267 extend-filesystems[1543]: Found usr Apr 13 23:34:58.906267 extend-filesystems[1543]: Found vda4 Apr 13 23:34:58.906267 extend-filesystems[1543]: Found vda6 Apr 13 23:34:58.906267 extend-filesystems[1543]: Found vda7 Apr 13 23:34:58.906267 extend-filesystems[1543]: Found vda9 Apr 13 23:34:58.906267 extend-filesystems[1543]: Checking size of /dev/vda9 Apr 13 23:34:59.016933 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 13 23:34:59.017219 extend-filesystems[1543]: Resized partition /dev/vda9 Apr 13 23:34:58.981010 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 13 23:34:59.024015 extend-filesystems[1556]: resize2fs 1.47.1 (20-May-2024) Apr 13 23:34:59.033389 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 13 23:34:59.056634 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 13 23:34:59.079719 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 13 23:34:59.089379 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 13 23:34:59.118088 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Apr 13 23:34:59.120454 extend-filesystems[1556]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 13 23:34:59.120454 extend-filesystems[1556]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 13 23:34:59.120454 extend-filesystems[1556]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 13 23:34:59.212724 extend-filesystems[1543]: Resized filesystem in /dev/vda9 Apr 13 23:34:59.205772 systemd[1]: Starting update-engine.service - Update Engine... Apr 13 23:34:59.233380 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 13 23:34:59.249760 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 13 23:34:59.260000 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1575) Apr 13 23:34:59.271055 jq[1587]: true Apr 13 23:34:59.278748 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 13 23:34:59.284529 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 13 23:34:59.287799 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 13 23:34:59.288097 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 13 23:34:59.330449 systemd[1]: motdgen.service: Deactivated successfully. Apr 13 23:34:59.340941 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 13 23:34:59.351318 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 13 23:34:59.365652 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 13 23:34:59.366730 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 13 23:34:59.515258 update_engine[1585]: I20260413 23:34:59.510579 1585 main.cc:92] Flatcar Update Engine starting Apr 13 23:34:59.526430 jq[1597]: true Apr 13 23:34:59.533951 update_engine[1585]: I20260413 23:34:59.533797 1585 update_check_scheduler.cc:74] Next update check in 7m2s Apr 13 23:34:59.540494 (ntainerd)[1599]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 13 23:34:59.562082 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 13 23:34:59.567562 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 13 23:34:59.759499 systemd-logind[1573]: Watching system buttons on /dev/input/event1 (Power Button) Apr 13 23:34:59.759519 systemd-logind[1573]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 13 23:34:59.764769 systemd-logind[1573]: New seat seat0. Apr 13 23:34:59.881735 tar[1593]: linux-amd64/LICENSE Apr 13 23:34:59.900603 tar[1593]: linux-amd64/helm Apr 13 23:34:59.966858 systemd[1]: Started systemd-logind.service - User Login Management. Apr 13 23:35:00.037604 bash[1631]: Updated "/home/core/.ssh/authorized_keys" Apr 13 23:35:00.045841 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 13 23:35:00.211830 systemd[1]: Started update-engine.service - Update Engine. Apr 13 23:35:00.286648 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 13 23:35:00.303072 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 13 23:35:00.307755 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Apr 13 23:35:00.310453 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 13 23:35:00.321983 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 13 23:35:00.325743 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 13 23:35:00.403027 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 13 23:35:00.412098 sshd_keygen[1586]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 13 23:35:00.423458 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 13 23:35:00.736633 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 13 23:35:01.298481 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 13 23:35:01.416213 systemd[1]: issuegen.service: Deactivated successfully. Apr 13 23:35:01.425859 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 13 23:35:01.826445 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 13 23:35:02.124478 locksmithd[1639]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 13 23:35:02.215852 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 13 23:35:02.289421 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 13 23:35:02.471593 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 13 23:35:02.513411 systemd[1]: Reached target getty.target - Login Prompts. Apr 13 23:35:03.272520 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 13 23:35:03.314073 systemd[1]: Started sshd@0-10.0.0.10:22-10.0.0.1:36744.service - OpenSSH per-connection server daemon (10.0.0.1:36744). 
Apr 13 23:35:04.358252 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 36744 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:35:04.378049 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:35:04.774508 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 13 23:35:05.061296 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 13 23:35:05.262434 systemd-logind[1573]: New session 1 of user core. Apr 13 23:35:05.529433 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 13 23:35:05.643459 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 13 23:35:05.791995 (systemd)[1676]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 23:35:06.302540 containerd[1599]: time="2026-04-13T23:35:06.294514538Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 13 23:35:07.089707 systemd[1676]: Queued start job for default target default.target. Apr 13 23:35:07.090344 systemd[1676]: Created slice app.slice - User Application Slice. Apr 13 23:35:07.090367 systemd[1676]: Reached target paths.target - Paths. Apr 13 23:35:07.090380 systemd[1676]: Reached target timers.target - Timers. Apr 13 23:35:07.111515 systemd[1676]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 23:35:07.188807 systemd[1676]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 23:35:07.189779 systemd[1676]: Reached target sockets.target - Sockets. Apr 13 23:35:07.189838 systemd[1676]: Reached target basic.target - Basic System. Apr 13 23:35:07.190867 systemd[1676]: Reached target default.target - Main User Target. Apr 13 23:35:07.191843 systemd[1676]: Startup finished in 1.350s. Apr 13 23:35:07.195580 systemd[1]: Started user@500.service - User Manager for UID 500. 
Apr 13 23:35:07.421437 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 13 23:35:07.934249 systemd[1]: Started sshd@1-10.0.0.10:22-10.0.0.1:42582.service - OpenSSH per-connection server daemon (10.0.0.1:42582). Apr 13 23:35:08.209097 containerd[1599]: time="2026-04-13T23:35:08.137160384Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 13 23:35:08.412413 containerd[1599]: time="2026-04-13T23:35:08.392884649Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 13 23:35:08.420891 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 42582 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:35:08.422016 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:35:08.429796 containerd[1599]: time="2026-04-13T23:35:08.417695543Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 13 23:35:08.455727 containerd[1599]: time="2026-04-13T23:35:08.451848220Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 13 23:35:08.465563 containerd[1599]: time="2026-04-13T23:35:08.464858812Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 13 23:35:08.467661 containerd[1599]: time="2026-04-13T23:35:08.465881640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 13 23:35:08.475436 containerd[1599]: time="2026-04-13T23:35:08.470430178Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 23:35:08.475436 containerd[1599]: time="2026-04-13T23:35:08.471096301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 13 23:35:08.536428 containerd[1599]: time="2026-04-13T23:35:08.528795361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 23:35:08.536428 containerd[1599]: time="2026-04-13T23:35:08.535804550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 13 23:35:08.546490 containerd[1599]: time="2026-04-13T23:35:08.541080748Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 23:35:08.548399 containerd[1599]: time="2026-04-13T23:35:08.546945881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 13 23:35:08.564461 containerd[1599]: time="2026-04-13T23:35:08.564173937Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 13 23:35:08.704881 containerd[1599]: time="2026-04-13T23:35:08.671568810Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 13 23:35:09.022488 containerd[1599]: time="2026-04-13T23:35:09.016722965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 23:35:09.029585 containerd[1599]: time="2026-04-13T23:35:09.024428665Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 13 23:35:09.055625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:35:09.067017 containerd[1599]: time="2026-04-13T23:35:09.059881853Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 13 23:35:09.067017 containerd[1599]: time="2026-04-13T23:35:09.062072232Z" level=info msg="metadata content store policy set" policy=shared Apr 13 23:35:09.089223 containerd[1599]: time="2026-04-13T23:35:09.086747628Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 13 23:35:09.089594 containerd[1599]: time="2026-04-13T23:35:09.089380577Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 13 23:35:09.089682 containerd[1599]: time="2026-04-13T23:35:09.089632502Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 13 23:35:09.089722 containerd[1599]: time="2026-04-13T23:35:09.089698127Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 13 23:35:09.092491 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:35:09.098978 containerd[1599]: time="2026-04-13T23:35:09.092038014Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 13 23:35:09.105882 containerd[1599]: time="2026-04-13T23:35:09.101685934Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Apr 13 23:35:09.160852 systemd-logind[1573]: New session 2 of user core. Apr 13 23:35:09.203011 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 13 23:35:09.314889 containerd[1599]: time="2026-04-13T23:35:09.296028980Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 13 23:35:09.343734 containerd[1599]: time="2026-04-13T23:35:09.339044480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 13 23:35:09.349342 containerd[1599]: time="2026-04-13T23:35:09.348730087Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 13 23:35:09.352825 containerd[1599]: time="2026-04-13T23:35:09.352592185Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 13 23:35:09.357256 containerd[1599]: time="2026-04-13T23:35:09.353820465Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 13 23:35:09.360473 containerd[1599]: time="2026-04-13T23:35:09.357351873Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 13 23:35:09.364072 containerd[1599]: time="2026-04-13T23:35:09.363588741Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 13 23:35:09.368796 containerd[1599]: time="2026-04-13T23:35:09.365773051Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 13 23:35:09.368796 containerd[1599]: time="2026-04-13T23:35:09.366812453Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Apr 13 23:35:09.377347 containerd[1599]: time="2026-04-13T23:35:09.370813377Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 13 23:35:09.392750 containerd[1599]: time="2026-04-13T23:35:09.387797739Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 13 23:35:09.475418 containerd[1599]: time="2026-04-13T23:35:09.408630699Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 13 23:35:09.519451 containerd[1599]: time="2026-04-13T23:35:09.504416115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 13 23:35:09.532200 containerd[1599]: time="2026-04-13T23:35:09.531346919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 13 23:35:09.546972 containerd[1599]: time="2026-04-13T23:35:09.535845545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 13 23:35:09.612857 containerd[1599]: time="2026-04-13T23:35:09.601763290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 13 23:35:09.690433 containerd[1599]: time="2026-04-13T23:35:09.685625365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 13 23:35:09.690433 containerd[1599]: time="2026-04-13T23:35:09.691267278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 13 23:35:09.710849 containerd[1599]: time="2026-04-13T23:35:09.696056869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 13 23:35:09.710849 containerd[1599]: time="2026-04-13T23:35:09.699425333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Apr 13 23:35:09.710849 containerd[1599]: time="2026-04-13T23:35:09.702845485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 13 23:35:09.710849 containerd[1599]: time="2026-04-13T23:35:09.704945475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 13 23:35:09.710849 containerd[1599]: time="2026-04-13T23:35:09.705049436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 13 23:35:09.710849 containerd[1599]: time="2026-04-13T23:35:09.705217503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 13 23:35:09.710849 containerd[1599]: time="2026-04-13T23:35:09.705253815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 13 23:35:09.710849 containerd[1599]: time="2026-04-13T23:35:09.705380282Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 13 23:35:09.710849 containerd[1599]: time="2026-04-13T23:35:09.707000925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 13 23:35:09.710849 containerd[1599]: time="2026-04-13T23:35:09.710502792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 13 23:35:09.736898 containerd[1599]: time="2026-04-13T23:35:09.712814862Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 13 23:35:09.736898 containerd[1599]: time="2026-04-13T23:35:09.715999678Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 13 23:35:09.736898 containerd[1599]: time="2026-04-13T23:35:09.716757099Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 13 23:35:09.736898 containerd[1599]: time="2026-04-13T23:35:09.718485158Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 13 23:35:09.736898 containerd[1599]: time="2026-04-13T23:35:09.720007928Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 13 23:35:09.736898 containerd[1599]: time="2026-04-13T23:35:09.720082842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 13 23:35:09.736898 containerd[1599]: time="2026-04-13T23:35:09.721415718Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 13 23:35:09.736898 containerd[1599]: time="2026-04-13T23:35:09.727426057Z" level=info msg="NRI interface is disabled by configuration." Apr 13 23:35:09.736898 containerd[1599]: time="2026-04-13T23:35:09.735787644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 13 23:35:09.772898 containerd[1599]: time="2026-04-13T23:35:09.763708251Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 13 23:35:09.796076 containerd[1599]: time="2026-04-13T23:35:09.791193917Z" level=info msg="Connect containerd service" Apr 13 23:35:09.826466 containerd[1599]: time="2026-04-13T23:35:09.823184333Z" level=info msg="using legacy CRI server" Apr 13 23:35:09.826466 containerd[1599]: time="2026-04-13T23:35:09.823819978Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 13 23:35:09.939583 sshd[1691]: pam_unix(sshd:session): session closed for user core Apr 13 23:35:09.963536 containerd[1599]: time="2026-04-13T23:35:09.962035570Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 13 23:35:10.113324 systemd[1]: Started sshd@2-10.0.0.10:22-10.0.0.1:42594.service - OpenSSH per-connection server daemon (10.0.0.1:42594). Apr 13 23:35:10.157592 systemd[1]: sshd@1-10.0.0.10:22-10.0.0.1:42582.service: Deactivated successfully. Apr 13 23:35:10.173085 systemd[1]: session-2.scope: Deactivated successfully. 
Apr 13 23:35:10.207549 containerd[1599]: time="2026-04-13T23:35:10.201402737Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 23:35:10.271312 containerd[1599]: time="2026-04-13T23:35:10.216802910Z" level=info msg="Start subscribing containerd event" Apr 13 23:35:10.286315 containerd[1599]: time="2026-04-13T23:35:10.284552567Z" level=info msg="Start recovering state" Apr 13 23:35:10.319083 containerd[1599]: time="2026-04-13T23:35:10.318637049Z" level=info msg="Start event monitor" Apr 13 23:35:10.326080 containerd[1599]: time="2026-04-13T23:35:10.323806695Z" level=info msg="Start snapshots syncer" Apr 13 23:35:10.336866 containerd[1599]: time="2026-04-13T23:35:10.331514278Z" level=info msg="Start cni network conf syncer for default" Apr 13 23:35:10.429300 containerd[1599]: time="2026-04-13T23:35:10.394526661Z" level=info msg="Start streaming server" Apr 13 23:35:10.573764 containerd[1599]: time="2026-04-13T23:35:10.493041540Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 23:35:10.583809 containerd[1599]: time="2026-04-13T23:35:10.572791738Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 13 23:35:10.604743 tar[1593]: linux-amd64/README.md Apr 13 23:35:10.715758 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 23:35:10.820284 containerd[1599]: time="2026-04-13T23:35:10.624392638Z" level=info msg="containerd successfully booted in 4.371193s" Apr 13 23:35:10.817403 systemd-logind[1573]: Session 2 logged out. Waiting for processes to exit. Apr 13 23:35:10.918536 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 13 23:35:11.010304 systemd[1]: Reached target multi-user.target - Multi-User System. 
Apr 13 23:35:11.016640 systemd[1]: Startup finished in 23.281s (kernel) + 40.852s (userspace) = 1min 4.134s. Apr 13 23:35:11.079641 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 42594 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:35:11.086482 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:35:11.225858 systemd-logind[1573]: Removed session 2. Apr 13 23:35:11.600086 systemd-logind[1573]: New session 3 of user core. Apr 13 23:35:11.707759 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 13 23:35:12.377655 sshd[1711]: pam_unix(sshd:session): session closed for user core Apr 13 23:35:12.589841 systemd[1]: sshd@2-10.0.0.10:22-10.0.0.1:42594.service: Deactivated successfully. Apr 13 23:35:12.673882 systemd[1]: session-3.scope: Deactivated successfully. Apr 13 23:35:12.797873 systemd-logind[1573]: Session 3 logged out. Waiting for processes to exit. Apr 13 23:35:12.993555 systemd-logind[1573]: Removed session 3. Apr 13 23:35:19.210985 kubelet[1703]: E0413 23:35:19.209454 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:35:19.245879 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:35:19.246606 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:35:22.684682 systemd[1]: Started sshd@3-10.0.0.10:22-10.0.0.1:54064.service - OpenSSH per-connection server daemon (10.0.0.1:54064). 
Apr 13 23:35:23.625938 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 54064 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:35:23.699801 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:35:24.268906 systemd-logind[1573]: New session 4 of user core. Apr 13 23:35:24.321823 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 13 23:35:25.040982 sshd[1735]: pam_unix(sshd:session): session closed for user core Apr 13 23:35:25.144556 systemd[1]: Started sshd@4-10.0.0.10:22-10.0.0.1:54074.service - OpenSSH per-connection server daemon (10.0.0.1:54074). Apr 13 23:35:25.167828 systemd[1]: sshd@3-10.0.0.10:22-10.0.0.1:54064.service: Deactivated successfully. Apr 13 23:35:25.256817 systemd[1]: session-4.scope: Deactivated successfully. Apr 13 23:35:25.294619 systemd-logind[1573]: Session 4 logged out. Waiting for processes to exit. Apr 13 23:35:25.420935 systemd-logind[1573]: Removed session 4. Apr 13 23:35:25.666962 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 54074 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:35:25.690965 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:35:25.920171 systemd-logind[1573]: New session 5 of user core. Apr 13 23:35:25.998685 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 13 23:35:26.328674 sshd[1740]: pam_unix(sshd:session): session closed for user core Apr 13 23:35:26.586819 systemd[1]: Started sshd@5-10.0.0.10:22-10.0.0.1:47956.service - OpenSSH per-connection server daemon (10.0.0.1:47956). Apr 13 23:35:26.597672 systemd[1]: sshd@4-10.0.0.10:22-10.0.0.1:54074.service: Deactivated successfully. Apr 13 23:35:26.614837 systemd[1]: session-5.scope: Deactivated successfully. Apr 13 23:35:26.665804 systemd-logind[1573]: Session 5 logged out. Waiting for processes to exit. Apr 13 23:35:26.728403 systemd-logind[1573]: Removed session 5. 
Apr 13 23:35:27.754506 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 47956 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:35:27.795695 sshd[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:35:28.107809 systemd-logind[1573]: New session 6 of user core. Apr 13 23:35:28.183890 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 13 23:35:28.829773 sshd[1749]: pam_unix(sshd:session): session closed for user core Apr 13 23:35:28.907362 systemd[1]: sshd@5-10.0.0.10:22-10.0.0.1:47956.service: Deactivated successfully. Apr 13 23:35:28.991880 systemd-logind[1573]: Session 6 logged out. Waiting for processes to exit. Apr 13 23:35:29.035851 systemd[1]: Started sshd@6-10.0.0.10:22-10.0.0.1:47960.service - OpenSSH per-connection server daemon (10.0.0.1:47960). Apr 13 23:35:29.037526 systemd[1]: session-6.scope: Deactivated successfully. Apr 13 23:35:29.095880 systemd-logind[1573]: Removed session 6. Apr 13 23:35:29.335790 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 13 23:35:29.417615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:35:29.611635 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 47960 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:35:29.636500 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:35:30.160621 systemd-logind[1573]: New session 7 of user core. Apr 13 23:35:30.195382 systemd[1]: Started session-7.scope - Session 7 of User core. 
Apr 13 23:35:30.961216 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 13 23:35:30.964414 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 23:35:31.226345 sudo[1767]: pam_unix(sudo:session): session closed for user root Apr 13 23:35:31.243364 sshd[1759]: pam_unix(sshd:session): session closed for user core Apr 13 23:35:31.337974 systemd[1]: Started sshd@7-10.0.0.10:22-10.0.0.1:47972.service - OpenSSH per-connection server daemon (10.0.0.1:47972). Apr 13 23:35:31.409803 systemd[1]: sshd@6-10.0.0.10:22-10.0.0.1:47960.service: Deactivated successfully. Apr 13 23:35:31.551225 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 23:35:31.727506 systemd-logind[1573]: Session 7 logged out. Waiting for processes to exit. Apr 13 23:35:31.813875 systemd-logind[1573]: Removed session 7. Apr 13 23:35:31.885738 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 47972 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:35:31.895591 sshd[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:35:32.726863 systemd-logind[1573]: New session 8 of user core. Apr 13 23:35:33.100498 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 13 23:35:33.235282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 23:35:33.304242 (kubelet)[1782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:35:33.824860 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 13 23:35:33.827978 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 23:35:34.133965 sudo[1789]: pam_unix(sudo:session): session closed for user root Apr 13 23:35:34.313897 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 13 23:35:34.321724 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 23:35:35.339869 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 13 23:35:35.357440 auditctl[1795]: No rules Apr 13 23:35:35.394613 systemd[1]: audit-rules.service: Deactivated successfully. Apr 13 23:35:35.395955 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 13 23:35:35.511667 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 23:35:35.953854 augenrules[1814]: No rules Apr 13 23:35:35.961700 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 23:35:36.029896 sudo[1785]: pam_unix(sudo:session): session closed for user root Apr 13 23:35:36.127661 sshd[1769]: pam_unix(sshd:session): session closed for user core Apr 13 23:35:36.404054 systemd[1]: Started sshd@8-10.0.0.10:22-10.0.0.1:34434.service - OpenSSH per-connection server daemon (10.0.0.1:34434). Apr 13 23:35:36.500949 systemd[1]: sshd@7-10.0.0.10:22-10.0.0.1:47972.service: Deactivated successfully. Apr 13 23:35:36.562889 systemd[1]: session-8.scope: Deactivated successfully. Apr 13 23:35:36.704904 systemd-logind[1573]: Session 8 logged out. Waiting for processes to exit. Apr 13 23:35:36.882865 systemd-logind[1573]: Removed session 8. 
Apr 13 23:35:37.107223 sshd[1821]: Accepted publickey for core from 10.0.0.1 port 34434 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:35:37.207691 sshd[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:35:37.873958 systemd-logind[1573]: New session 9 of user core. Apr 13 23:35:38.061418 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 13 23:35:38.627909 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 13 23:35:38.636683 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 23:35:44.935480 update_engine[1585]: I20260413 23:35:44.929796 1585 update_attempter.cc:509] Updating boot flags... Apr 13 23:35:45.802799 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1853) Apr 13 23:35:47.068451 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1853) Apr 13 23:35:47.240562 kubelet[1782]: E0413 23:35:47.232853 1782 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:35:47.262711 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:35:47.275573 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:35:50.360846 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 13 23:35:50.389896 (dockerd)[1864]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 13 23:35:53.819195 dockerd[1864]: time="2026-04-13T23:35:53.817804879Z" level=info msg="Starting up" Apr 13 23:35:56.664819 dockerd[1864]: time="2026-04-13T23:35:56.663397576Z" level=info msg="Loading containers: start." Apr 13 23:35:57.607776 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 13 23:35:57.863715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:35:59.424517 kernel: Initializing XFRM netlink socket Apr 13 23:36:00.024074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:36:00.079623 (kubelet)[1956]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:36:01.123522 systemd-networkd[1246]: docker0: Link UP Apr 13 23:36:01.745010 dockerd[1864]: time="2026-04-13T23:36:01.743392562Z" level=info msg="Loading containers: done." Apr 13 23:36:02.942846 dockerd[1864]: time="2026-04-13T23:36:02.936067834Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 13 23:36:02.974390 dockerd[1864]: time="2026-04-13T23:36:02.969814110Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 13 23:36:02.996015 dockerd[1864]: time="2026-04-13T23:36:02.987447245Z" level=info msg="Daemon has completed initialization" Apr 13 23:36:03.344699 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2120627198-merged.mount: Deactivated successfully. 
Apr 13 23:36:07.031793 dockerd[1864]: time="2026-04-13T23:36:07.025695265Z" level=info msg="API listen on /run/docker.sock" Apr 13 23:36:07.028460 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 13 23:36:08.015455 kubelet[1956]: E0413 23:36:08.009784 1956 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:36:08.035248 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:36:08.095753 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:36:18.194364 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 13 23:36:18.378839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:36:20.109297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:36:20.162857 (kubelet)[2042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:36:23.902787 kubelet[2042]: E0413 23:36:23.898007 2042 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:36:23.918906 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:36:23.924365 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 13 23:36:32.534389 containerd[1599]: time="2026-04-13T23:36:32.530834530Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\"" Apr 13 23:36:34.207021 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 13 23:36:34.420779 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:36:35.613005 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:36:35.613942 (kubelet)[2067]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:36:37.186942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount101644150.mount: Deactivated successfully. Apr 13 23:36:38.466722 kubelet[2067]: E0413 23:36:38.464933 2067 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:36:38.477871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:36:38.488007 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:36:48.637613 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 13 23:36:48.827020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:36:50.093053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 23:36:50.106053 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:36:52.079041 kubelet[2124]: E0413 23:36:52.075931 2124 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:36:52.103767 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:36:52.108657 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:37:01.800950 containerd[1599]: time="2026-04-13T23:37:01.799511339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:37:01.814474 containerd[1599]: time="2026-04-13T23:37:01.799827395Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=29988857" Apr 13 23:37:01.921749 containerd[1599]: time="2026-04-13T23:37:01.919030454Z" level=info msg="ImageCreate event name:\"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:37:02.402588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 13 23:37:02.540056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 13 23:37:02.783604 containerd[1599]: time="2026-04-13T23:37:02.781238325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:37:03.295677 containerd[1599]: time="2026-04-13T23:37:03.295303506Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"29986018\" in 30.758558497s" Apr 13 23:37:03.301079 containerd[1599]: time="2026-04-13T23:37:03.299556659Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\"" Apr 13 23:37:03.438616 containerd[1599]: time="2026-04-13T23:37:03.437771154Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\"" Apr 13 23:37:04.226939 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:37:04.230086 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:37:06.981398 kubelet[2168]: E0413 23:37:06.979865 2168 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:37:07.014315 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:37:07.014915 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 13 23:37:11.995405 containerd[1599]: time="2026-04-13T23:37:11.991671184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:37:12.005494 containerd[1599]: time="2026-04-13T23:37:11.999932991Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=26021841" Apr 13 23:37:12.210259 containerd[1599]: time="2026-04-13T23:37:12.209216141Z" level=info msg="ImageCreate event name:\"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:37:12.618651 containerd[1599]: time="2026-04-13T23:37:12.617630154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:37:12.737474 containerd[1599]: time="2026-04-13T23:37:12.736764186Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"27552094\" in 9.280094574s" Apr 13 23:37:12.788665 containerd[1599]: time="2026-04-13T23:37:12.739910303Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\"" Apr 13 23:37:12.862699 containerd[1599]: time="2026-04-13T23:37:12.861309596Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\"" Apr 13 23:37:17.218679 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. 
Apr 13 23:37:17.436839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:37:19.287901 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:37:19.389630 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:37:23.335189 kubelet[2198]: E0413 23:37:23.334788 2198 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:37:23.425222 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:37:23.428758 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:37:26.801390 containerd[1599]: time="2026-04-13T23:37:26.798616199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:37:26.839375 containerd[1599]: time="2026-04-13T23:37:26.820011921Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=20162685" Apr 13 23:37:26.882321 containerd[1599]: time="2026-04-13T23:37:26.880374683Z" level=info msg="ImageCreate event name:\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:37:27.367446 containerd[1599]: time="2026-04-13T23:37:27.359580134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:37:27.408739 containerd[1599]: time="2026-04-13T23:37:27.406238966Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"21692956\" in 14.537075963s" Apr 13 23:37:27.408739 containerd[1599]: time="2026-04-13T23:37:27.406513239Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\"" Apr 13 23:37:27.652392 containerd[1599]: time="2026-04-13T23:37:27.642948864Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\"" Apr 13 23:37:33.806705 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 13 23:37:33.921370 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:37:38.386598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:37:38.425081 (kubelet)[2225]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:37:40.437371 kubelet[2225]: E0413 23:37:40.435971 2225 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:37:40.450783 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:37:40.451259 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:37:49.985977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4110033594.mount: Deactivated successfully. 
Apr 13 23:37:50.591676 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 13 23:37:50.637381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:37:51.715669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:37:51.718502 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:37:52.091425 kubelet[2248]: E0413 23:37:52.090933 2248 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:37:52.099397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:37:52.104448 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 13 23:37:53.034056 containerd[1599]: time="2026-04-13T23:37:53.033668073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:37:53.038420 containerd[1599]: time="2026-04-13T23:37:53.035947885Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=31828657" Apr 13 23:37:53.049446 containerd[1599]: time="2026-04-13T23:37:53.047978819Z" level=info msg="ImageCreate event name:\"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:37:53.143745 containerd[1599]: time="2026-04-13T23:37:53.143213806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:37:53.223427 containerd[1599]: time="2026-04-13T23:37:53.222885555Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"31827782\" in 25.574761328s" Apr 13 23:37:53.223427 containerd[1599]: time="2026-04-13T23:37:53.223349610Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\"" Apr 13 23:37:53.233377 containerd[1599]: time="2026-04-13T23:37:53.233142384Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 13 23:37:56.249092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3913129676.mount: Deactivated successfully. 
Apr 13 23:38:02.329992 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 13 23:38:02.426453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:38:03.689703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:38:03.690677 (kubelet)[2283]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:38:05.205184 kubelet[2283]: E0413 23:38:05.204661 2283 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:38:05.211992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:38:05.212384 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 13 23:38:12.564636 containerd[1599]: time="2026-04-13T23:38:12.561463589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:38:12.575866 containerd[1599]: time="2026-04-13T23:38:12.562980371Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 13 23:38:12.606214 containerd[1599]: time="2026-04-13T23:38:12.605652522Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:38:12.930288 containerd[1599]: time="2026-04-13T23:38:12.928086503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:38:13.031320 containerd[1599]: time="2026-04-13T23:38:13.030258649Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 19.794530248s" Apr 13 23:38:13.034374 containerd[1599]: time="2026-04-13T23:38:13.033358664Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 13 23:38:13.118578 containerd[1599]: time="2026-04-13T23:38:13.118158843Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 13 23:38:14.772463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2333920001.mount: Deactivated successfully. 
Apr 13 23:38:14.828038 containerd[1599]: time="2026-04-13T23:38:14.827509525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:38:14.834700 containerd[1599]: time="2026-04-13T23:38:14.832628632Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 13 23:38:14.867080 containerd[1599]: time="2026-04-13T23:38:14.866306331Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:38:14.876949 containerd[1599]: time="2026-04-13T23:38:14.876520148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:38:14.878242 containerd[1599]: time="2026-04-13T23:38:14.878093364Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.75970286s" Apr 13 23:38:14.878408 containerd[1599]: time="2026-04-13T23:38:14.878265606Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 13 23:38:14.893515 containerd[1599]: time="2026-04-13T23:38:14.893131314Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 13 23:38:15.309194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 13 23:38:15.332773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 13 23:38:15.713992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:38:15.714044 (kubelet)[2347]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:38:15.876001 kubelet[2347]: E0413 23:38:15.875504 2347 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:38:15.879504 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:38:15.879827 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:38:16.379378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3691766771.mount: Deactivated successfully. Apr 13 23:38:26.094623 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 13 23:38:26.151672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:38:26.972978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 23:38:26.978616 (kubelet)[2413]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:38:27.991627 kubelet[2413]: E0413 23:38:27.989790 2413 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:38:28.002541 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:38:28.002940 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:38:34.076924 containerd[1599]: time="2026-04-13T23:38:34.075834725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:38:34.089931 containerd[1599]: time="2026-04-13T23:38:34.084815413Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718278" Apr 13 23:38:34.101075 containerd[1599]: time="2026-04-13T23:38:34.100466330Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:38:34.292945 containerd[1599]: time="2026-04-13T23:38:34.292545865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:38:34.411029 containerd[1599]: time="2026-04-13T23:38:34.408917758Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest 
\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 19.515340839s" Apr 13 23:38:34.411029 containerd[1599]: time="2026-04-13T23:38:34.410360941Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 13 23:38:38.116367 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Apr 13 23:38:38.154874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:38:39.280882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:38:39.295960 (kubelet)[2476]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:38:39.879365 kubelet[2476]: E0413 23:38:39.878901 2476 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:38:39.887959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:38:39.892610 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:38:50.115950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Apr 13 23:38:50.251754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:38:51.838834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 23:38:51.856569 (kubelet)[2498]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:38:53.137926 kubelet[2498]: E0413 23:38:53.136318 2498 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:38:53.168953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:38:53.171991 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:38:55.386785 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:38:55.483660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:38:55.793642 systemd[1]: Reloading requested from client PID 2517 ('systemctl') (unit session-9.scope)... Apr 13 23:38:55.795072 systemd[1]: Reloading... Apr 13 23:38:56.682557 zram_generator::config[2554]: No configuration found. Apr 13 23:38:57.488197 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 23:38:57.841888 systemd[1]: Reloading finished in 2043 ms. Apr 13 23:38:57.996061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:38:58.015782 (kubelet)[2602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 23:38:58.074495 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:38:58.078046 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 23:38:58.078806 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 23:38:58.105263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:38:59.307625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:38:59.396769 (kubelet)[2625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 23:39:00.830624 kubelet[2625]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 23:39:00.830624 kubelet[2625]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 23:39:00.830624 kubelet[2625]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 13 23:39:00.832557 kubelet[2625]: I0413 23:39:00.830844 2625 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 23:39:03.617396 kubelet[2625]: I0413 23:39:03.613606 2625 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 23:39:03.620703 kubelet[2625]: I0413 23:39:03.619015 2625 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 23:39:03.642068 kubelet[2625]: I0413 23:39:03.641697 2625 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 23:39:03.996484 kubelet[2625]: E0413 23:39:03.992933 2625 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:39:04.115360 kubelet[2625]: I0413 23:39:04.112880 2625 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 23:39:04.732234 kubelet[2625]: E0413 23:39:04.723496 2625 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 23:39:04.755652 kubelet[2625]: I0413 23:39:04.735191 2625 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 23:39:05.005280 kubelet[2625]: I0413 23:39:05.004288 2625 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 13 23:39:05.031618 kubelet[2625]: I0413 23:39:05.029382 2625 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 23:39:05.077558 kubelet[2625]: I0413 23:39:05.032555 2625 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 13 23:39:05.084139 kubelet[2625]: I0413 23:39:05.083639 2625 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 23:39:05.084489 
kubelet[2625]: I0413 23:39:05.084443 2625 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 23:39:05.105571 kubelet[2625]: I0413 23:39:05.099720 2625 state_mem.go:36] "Initialized new in-memory state store" Apr 13 23:39:05.180159 kubelet[2625]: I0413 23:39:05.178947 2625 kubelet.go:480] "Attempting to sync node with API server" Apr 13 23:39:05.188467 kubelet[2625]: I0413 23:39:05.182184 2625 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 23:39:05.188467 kubelet[2625]: I0413 23:39:05.187980 2625 kubelet.go:386] "Adding apiserver pod source" Apr 13 23:39:05.198453 kubelet[2625]: I0413 23:39:05.191187 2625 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 23:39:05.211526 kubelet[2625]: E0413 23:39:05.209075 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:39:05.222932 kubelet[2625]: E0413 23:39:05.209090 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:39:05.297874 kubelet[2625]: I0413 23:39:05.284062 2625 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 23:39:05.354541 kubelet[2625]: I0413 23:39:05.332687 2625 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 23:39:05.369554 kubelet[2625]: W0413 
23:39:05.366780 2625 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 13 23:39:05.471404 kubelet[2625]: I0413 23:39:05.470501 2625 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 23:39:05.487556 kubelet[2625]: I0413 23:39:05.485594 2625 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 23:39:05.496894 kubelet[2625]: I0413 23:39:05.496015 2625 server.go:1289] "Started kubelet" Apr 13 23:39:05.525606 kubelet[2625]: I0413 23:39:05.523253 2625 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 23:39:05.556853 kubelet[2625]: E0413 23:39:05.523241 2625 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.10:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a60ef44d168a5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:39:05.472371294 +0000 UTC m=+6.047920652,LastTimestamp:2026-04-13 23:39:05.472371294 +0000 UTC m=+6.047920652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:39:05.563598 kubelet[2625]: I0413 23:39:05.560511 2625 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 23:39:05.603323 kubelet[2625]: I0413 23:39:05.602979 2625 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 23:39:05.625511 kubelet[2625]: I0413 23:39:05.615151 2625 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 23:39:05.636146 
kubelet[2625]: I0413 23:39:05.633473 2625 server.go:317] "Adding debug handlers to kubelet server" Apr 13 23:39:05.636146 kubelet[2625]: I0413 23:39:05.633515 2625 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 23:39:05.646206 kubelet[2625]: E0413 23:39:05.645840 2625 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:39:05.649382 kubelet[2625]: I0413 23:39:05.649159 2625 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 23:39:05.652745 kubelet[2625]: I0413 23:39:05.650651 2625 reconciler.go:26] "Reconciler: start to sync state" Apr 13 23:39:05.664859 kubelet[2625]: E0413 23:39:05.663400 2625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="200ms" Apr 13 23:39:05.669668 kubelet[2625]: E0413 23:39:05.669487 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:39:05.725324 kubelet[2625]: E0413 23:39:05.724977 2625 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 23:39:05.749522 kubelet[2625]: E0413 23:39:05.748816 2625 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:39:05.781835 kubelet[2625]: I0413 23:39:05.771283 2625 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Apr 13 23:39:05.781835 kubelet[2625]: I0413 23:39:05.781709 2625 factory.go:223] Registration of the containerd container factory successfully Apr 13 23:39:05.781835 kubelet[2625]: I0413 23:39:05.782070 2625 factory.go:223] Registration of the systemd container factory successfully Apr 13 23:39:05.800735 kubelet[2625]: I0413 23:39:05.793839 2625 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 23:39:05.800735 kubelet[2625]: I0413 23:39:05.794039 2625 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 23:39:05.800735 kubelet[2625]: I0413 23:39:05.797597 2625 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 23:39:05.800735 kubelet[2625]: I0413 23:39:05.800170 2625 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 13 23:39:05.800735 kubelet[2625]: I0413 23:39:05.800340 2625 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 23:39:05.800735 kubelet[2625]: E0413 23:39:05.800430 2625 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 23:39:05.881961 kubelet[2625]: E0413 23:39:05.864435 2625 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:39:05.913597 kubelet[2625]: E0413 23:39:05.909614 2625 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 23:39:05.926357 kubelet[2625]: E0413 23:39:05.924229 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:39:05.930959 kubelet[2625]: E0413 23:39:05.930638 2625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="400ms" Apr 13 23:39:05.996648 kubelet[2625]: E0413 23:39:05.995610 2625 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:39:06.098993 kubelet[2625]: E0413 23:39:06.096959 2625 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:39:06.120435 kubelet[2625]: E0413 23:39:06.120153 2625 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 13 
23:39:06.121699 kubelet[2625]: E0413 23:39:06.121625 2625 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:39:06.204156 kubelet[2625]: E0413 23:39:06.200653 2625 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:39:06.287543 kubelet[2625]: E0413 23:39:06.286358 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:39:06.312374 kubelet[2625]: E0413 23:39:06.310999 2625 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:39:06.415633 kubelet[2625]: E0413 23:39:06.414883 2625 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:39:06.435271 kubelet[2625]: E0413 23:39:06.433695 2625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="800ms" Apr 13 23:39:06.477438 kubelet[2625]: I0413 23:39:06.476253 2625 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 23:39:06.478736 kubelet[2625]: I0413 23:39:06.478530 2625 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 23:39:06.478809 kubelet[2625]: I0413 23:39:06.478765 2625 state_mem.go:36] "Initialized new 
in-memory state store" Apr 13 23:39:06.501634 kubelet[2625]: I0413 23:39:06.501043 2625 policy_none.go:49] "None policy: Start" Apr 13 23:39:06.513017 kubelet[2625]: I0413 23:39:06.503030 2625 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 23:39:06.513017 kubelet[2625]: I0413 23:39:06.508886 2625 state_mem.go:35] "Initializing new in-memory state store" Apr 13 23:39:06.518887 kubelet[2625]: E0413 23:39:06.518511 2625 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:39:06.525691 kubelet[2625]: E0413 23:39:06.525322 2625 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 13 23:39:06.620516 kubelet[2625]: E0413 23:39:06.620024 2625 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:39:06.634402 kubelet[2625]: E0413 23:39:06.623541 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:39:06.677603 kubelet[2625]: E0413 23:39:06.677146 2625 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 23:39:06.684470 kubelet[2625]: I0413 23:39:06.684000 2625 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 23:39:06.690838 kubelet[2625]: I0413 23:39:06.688552 2625 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 23:39:06.695596 kubelet[2625]: I0413 23:39:06.693885 2625 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 23:39:06.887659 kubelet[2625]: E0413 23:39:06.887395 2625 eviction_manager.go:267] 
"eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 23:39:06.893407 kubelet[2625]: E0413 23:39:06.892199 2625 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:39:06.921838 kubelet[2625]: E0413 23:39:06.920576 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:39:06.933799 kubelet[2625]: I0413 23:39:06.921881 2625 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:39:07.016143 kubelet[2625]: E0413 23:39:07.013695 2625 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Apr 13 23:39:07.276933 kubelet[2625]: E0413 23:39:07.272932 2625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="1.6s" Apr 13 23:39:07.281807 kubelet[2625]: E0413 23:39:07.276577 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:39:07.318080 kubelet[2625]: I0413 23:39:07.317690 2625 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:39:07.328754 kubelet[2625]: 
E0413 23:39:07.328096 2625 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Apr 13 23:39:07.520518 kubelet[2625]: I0413 23:39:07.519276 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/867491ae9245c87a1735f75bc55f305c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"867491ae9245c87a1735f75bc55f305c\") " pod="kube-system/kube-apiserver-localhost" Apr 13 23:39:07.536007 kubelet[2625]: I0413 23:39:07.524638 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/867491ae9245c87a1735f75bc55f305c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"867491ae9245c87a1735f75bc55f305c\") " pod="kube-system/kube-apiserver-localhost" Apr 13 23:39:07.536007 kubelet[2625]: I0413 23:39:07.529888 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/867491ae9245c87a1735f75bc55f305c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"867491ae9245c87a1735f75bc55f305c\") " pod="kube-system/kube-apiserver-localhost" Apr 13 23:39:07.652287 kubelet[2625]: I0413 23:39:07.651873 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:39:07.674425 kubelet[2625]: I0413 23:39:07.671084 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:39:07.676889 kubelet[2625]: I0413 23:39:07.676495 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:39:07.676889 kubelet[2625]: I0413 23:39:07.676729 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:39:07.676945 kubelet[2625]: I0413 23:39:07.676909 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:39:07.830940 kubelet[2625]: E0413 23:39:07.742485 2625 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:39:07.830940 kubelet[2625]: E0413 23:39:07.819861 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:39:07.841161 kubelet[2625]: I0413 23:39:07.840597 2625 kubelet_node_status.go:75] 
"Attempting to register node" node="localhost" Apr 13 23:39:07.845315 kubelet[2625]: E0413 23:39:07.843773 2625 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Apr 13 23:39:07.861530 containerd[1599]: time="2026-04-13T23:39:07.859754351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:867491ae9245c87a1735f75bc55f305c,Namespace:kube-system,Attempt:0,}" Apr 13 23:39:07.935277 kubelet[2625]: E0413 23:39:07.928722 2625 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:39:08.013717 kubelet[2625]: I0413 23:39:07.996634 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost" Apr 13 23:39:08.020032 kubelet[2625]: E0413 23:39:08.019873 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:39:08.039637 kubelet[2625]: E0413 23:39:08.037712 2625 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:39:08.072468 containerd[1599]: time="2026-04-13T23:39:08.069923747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,}" Apr 13 23:39:08.393712 kubelet[2625]: E0413 23:39:08.393158 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:39:08.419343 kubelet[2625]: E0413 23:39:08.418098 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:39:08.476439 containerd[1599]: time="2026-04-13T23:39:08.438481834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,}" Apr 13 23:39:08.902479 kubelet[2625]: I0413 23:39:08.900715 2625 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:39:08.909172 kubelet[2625]: E0413 23:39:08.908535 2625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="3.2s" Apr 13 23:39:09.002629 kubelet[2625]: E0413 23:39:09.001721 2625 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Apr 13 23:39:09.081435 kubelet[2625]: E0413 23:39:09.079138 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:39:09.452570 kubelet[2625]: E0413 23:39:09.450679 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:39:09.654533 kubelet[2625]: E0413 23:39:09.653005 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:39:10.091150 kubelet[2625]: E0413 23:39:10.090624 2625 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.10:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a60ef44d168a5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:39:05.472371294 +0000 UTC m=+6.047920652,LastTimestamp:2026-04-13 23:39:05.472371294 +0000 UTC m=+6.047920652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:39:10.287353 kubelet[2625]: E0413 23:39:10.284080 2625 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:39:10.834697 kubelet[2625]: I0413 
23:39:10.834548 2625 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:39:10.838166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount67546644.mount: Deactivated successfully. Apr 13 23:39:10.841148 kubelet[2625]: E0413 23:39:10.840977 2625 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Apr 13 23:39:10.868662 containerd[1599]: time="2026-04-13T23:39:10.868489211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:39:10.885483 containerd[1599]: time="2026-04-13T23:39:10.883646011Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 13 23:39:10.987704 containerd[1599]: time="2026-04-13T23:39:10.973996486Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:39:11.032053 containerd[1599]: time="2026-04-13T23:39:11.031633448Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 23:39:11.051797 containerd[1599]: time="2026-04-13T23:39:11.047782645Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 23:39:11.125267 containerd[1599]: time="2026-04-13T23:39:11.123506049Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:39:11.532783 containerd[1599]: time="2026-04-13T23:39:11.530695230Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:39:11.836749 containerd[1599]: time="2026-04-13T23:39:11.833973569Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.965489985s" Apr 13 23:39:11.881526 containerd[1599]: time="2026-04-13T23:39:11.880652598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:39:11.894022 containerd[1599]: time="2026-04-13T23:39:11.886602877Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.810581951s" Apr 13 23:39:11.917402 containerd[1599]: time="2026-04-13T23:39:11.917093594Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.438833431s" Apr 13 23:39:12.292952 kubelet[2625]: E0413 23:39:12.289892 2625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" 
interval="6.4s" Apr 13 23:39:12.627765 containerd[1599]: time="2026-04-13T23:39:12.625856939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:39:12.627765 containerd[1599]: time="2026-04-13T23:39:12.625901537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:39:12.627765 containerd[1599]: time="2026-04-13T23:39:12.625914347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:39:12.627765 containerd[1599]: time="2026-04-13T23:39:12.625984988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:39:12.630358 containerd[1599]: time="2026-04-13T23:39:12.625569546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:39:12.630358 containerd[1599]: time="2026-04-13T23:39:12.625661373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:39:12.630358 containerd[1599]: time="2026-04-13T23:39:12.625673816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:39:12.630358 containerd[1599]: time="2026-04-13T23:39:12.626026226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:39:12.783510 containerd[1599]: time="2026-04-13T23:39:12.779591027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:39:12.827181 containerd[1599]: time="2026-04-13T23:39:12.820278454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:39:12.827181 containerd[1599]: time="2026-04-13T23:39:12.820337526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:39:12.827181 containerd[1599]: time="2026-04-13T23:39:12.820594297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:39:14.000544 containerd[1599]: time="2026-04-13T23:39:13.998271258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"6eb4e023684f0aae0c0b695bdae7b9e1c4c88cd28d650a1756ae84c21f4b692c\"" Apr 13 23:39:14.086837 kubelet[2625]: E0413 23:39:14.086402 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:39:14.119574 kubelet[2625]: E0413 23:39:14.119184 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:39:14.127083 kubelet[2625]: I0413 23:39:14.125760 2625 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:39:14.201795 containerd[1599]: time="2026-04-13T23:39:14.201178568Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:867491ae9245c87a1735f75bc55f305c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e842a7a584daabd68832cb8b8a99850df26f1e4c92ec2b72cdea84d4dcc04dd2\"" Apr 13 23:39:14.204269 kubelet[2625]: E0413 23:39:14.204001 2625 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Apr 13 23:39:14.205377 containerd[1599]: time="2026-04-13T23:39:14.205332458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd39e6ba049f25172b871051ff662688823f734c044d1ef9098be63aa4680f60\"" Apr 13 23:39:14.217592 kubelet[2625]: E0413 23:39:14.215497 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:39:14.278805 kubelet[2625]: E0413 23:39:14.277091 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:39:14.475627 containerd[1599]: time="2026-04-13T23:39:14.471958740Z" level=info msg="CreateContainer within sandbox \"6eb4e023684f0aae0c0b695bdae7b9e1c4c88cd28d650a1756ae84c21f4b692c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 23:39:14.489730 containerd[1599]: time="2026-04-13T23:39:14.489272949Z" level=info msg="CreateContainer within sandbox \"e842a7a584daabd68832cb8b8a99850df26f1e4c92ec2b72cdea84d4dcc04dd2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 23:39:14.888174 kubelet[2625]: E0413 23:39:14.887887 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:39:14.899864 containerd[1599]: time="2026-04-13T23:39:14.888999811Z" level=info msg="CreateContainer within sandbox \"fd39e6ba049f25172b871051ff662688823f734c044d1ef9098be63aa4680f60\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 23:39:15.085685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067319127.mount: Deactivated successfully. Apr 13 23:39:15.224066 kubelet[2625]: E0413 23:39:15.218898 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:39:15.452526 kubelet[2625]: E0413 23:39:15.432853 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:39:15.483426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2197527053.mount: Deactivated successfully. Apr 13 23:39:15.488541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1090540123.mount: Deactivated successfully. Apr 13 23:39:15.700573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount97665436.mount: Deactivated successfully. 
Apr 13 23:39:15.920155 containerd[1599]: time="2026-04-13T23:39:15.919506907Z" level=info msg="CreateContainer within sandbox \"6eb4e023684f0aae0c0b695bdae7b9e1c4c88cd28d650a1756ae84c21f4b692c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa\"" Apr 13 23:39:15.976009 containerd[1599]: time="2026-04-13T23:39:15.928954977Z" level=info msg="CreateContainer within sandbox \"e842a7a584daabd68832cb8b8a99850df26f1e4c92ec2b72cdea84d4dcc04dd2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ff8ab26f5c42142dbb0d05c72bb3a2cd0ed07202de97a8d28a4ed3bd778f1a07\"" Apr 13 23:39:16.179092 containerd[1599]: time="2026-04-13T23:39:16.137698666Z" level=info msg="StartContainer for \"114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa\"" Apr 13 23:39:16.209507 containerd[1599]: time="2026-04-13T23:39:16.207821975Z" level=info msg="StartContainer for \"ff8ab26f5c42142dbb0d05c72bb3a2cd0ed07202de97a8d28a4ed3bd778f1a07\"" Apr 13 23:39:16.289021 containerd[1599]: time="2026-04-13T23:39:16.288081810Z" level=info msg="CreateContainer within sandbox \"fd39e6ba049f25172b871051ff662688823f734c044d1ef9098be63aa4680f60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"abd059c5222283a91d0160b240534b84d1d2035b3ef273a4e0803d5aab1d1a4a\"" Apr 13 23:39:16.505296 containerd[1599]: time="2026-04-13T23:39:16.503984999Z" level=info msg="StartContainer for \"abd059c5222283a91d0160b240534b84d1d2035b3ef273a4e0803d5aab1d1a4a\"" Apr 13 23:39:17.092604 kubelet[2625]: E0413 23:39:17.091388 2625 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:39:18.780533 containerd[1599]: time="2026-04-13T23:39:18.748049507Z" level=info msg="StartContainer for \"114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa\" returns successfully" Apr 13 
23:39:19.008621 containerd[1599]: time="2026-04-13T23:39:18.980917972Z" level=info msg="StartContainer for \"ff8ab26f5c42142dbb0d05c72bb3a2cd0ed07202de97a8d28a4ed3bd778f1a07\" returns successfully" Apr 13 23:39:19.194589 kubelet[2625]: E0413 23:39:19.190996 2625 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:39:19.562800 kubelet[2625]: E0413 23:39:19.378457 2625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="7s" Apr 13 23:39:20.192592 kubelet[2625]: E0413 23:39:20.192006 2625 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.10:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a60ef44d168a5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:39:05.472371294 +0000 UTC m=+6.047920652,LastTimestamp:2026-04-13 23:39:05.472371294 +0000 UTC m=+6.047920652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:39:24.838705 kubelet[2625]: I0413 23:39:24.838353 2625 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:39:25.208772 containerd[1599]: 
time="2026-04-13T23:39:25.005092027Z" level=info msg="StartContainer for \"abd059c5222283a91d0160b240534b84d1d2035b3ef273a4e0803d5aab1d1a4a\" returns successfully" Apr 13 23:39:27.493616 kubelet[2625]: E0413 23:39:27.485954 2625 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:39:32.799707 kubelet[2625]: E0413 23:39:32.750666 2625 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:39:32.897264 kubelet[2625]: E0413 23:39:32.804921 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:39:34.110767 kubelet[2625]: E0413 23:39:34.109486 2625 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:39:34.187597 kubelet[2625]: E0413 23:39:34.179038 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:39:34.315715 kubelet[2625]: E0413 23:39:34.304452 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:39:34.954532 kubelet[2625]: E0413 23:39:34.951634 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" 
Apr 13 23:39:36.403018 kubelet[2625]: E0413 23:39:36.386836 2625 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:39:37.062658 kubelet[2625]: E0413 23:39:37.057582 2625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 13 23:39:37.924647 kubelet[2625]: E0413 23:39:37.915649 2625 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:39:38.268097 kubelet[2625]: E0413 23:39:38.256397 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:39:38.728503 kubelet[2625]: E0413 23:39:38.724173 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:39:39.158561 kubelet[2625]: E0413 23:39:39.153808 2625 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:39:39.319294 kubelet[2625]: E0413 23:39:39.318152 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:39:40.504794 
kubelet[2625]: E0413 23:39:40.483227 2625 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:39:40.524681 kubelet[2625]: E0413 23:39:40.521190 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:39:40.711834 kubelet[2625]: E0413 23:39:40.710200 2625 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a60ef44d168a5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:39:05.472371294 +0000 UTC m=+6.047920652,LastTimestamp:2026-04-13 23:39:05.472371294 +0000 UTC m=+6.047920652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:39:40.782931 kubelet[2625]: E0413 23:39:40.747647 2625 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:39:40.799415 kubelet[2625]: E0413 23:39:40.798818 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:39:41.888096 kubelet[2625]: E0413 23:39:41.887640 2625 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:39:41.911692 kubelet[2625]: E0413 
23:39:41.906997 2625 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:39:41.911692 kubelet[2625]: E0413 23:39:41.908453 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:39:41.911692 kubelet[2625]: E0413 23:39:41.909347 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:39:41.911692 kubelet[2625]: E0413 23:39:41.909493 2625 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:39:41.911692 kubelet[2625]: E0413 23:39:41.909669 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:39:45.037600 kubelet[2625]: I0413 23:39:45.034738 2625 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:39:45.862706 kubelet[2625]: E0413 23:39:45.862184 2625 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:39:45.955063 kubelet[2625]: E0413 23:39:45.952955 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:39:46.500795 kubelet[2625]: E0413 23:39:46.490257 2625 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: 
TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:39:46.508689 kubelet[2625]: E0413 23:39:46.504559 2625 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:39:47.934055 kubelet[2625]: E0413 23:39:47.933544 2625 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:39:54.250845 kubelet[2625]: E0413 23:39:54.246940 2625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 13 23:39:55.321438 kubelet[2625]: E0413 23:39:55.320269 2625 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:39:57.991411 kubelet[2625]: E0413 23:39:57.990886 2625 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:40:02.587519 kubelet[2625]: E0413 23:40:02.327172 2625 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a60ef44d168a5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:39:05.472371294 +0000 UTC m=+6.047920652,LastTimestamp:2026-04-13 23:39:05.472371294 +0000 UTC 
m=+6.047920652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:40:02.921190 kubelet[2625]: E0413 23:40:02.913084 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:40:03.045492 kubelet[2625]: I0413 23:40:03.041089 2625 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:40:04.224521 kubelet[2625]: E0413 23:40:04.223876 2625 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:40:04.254078 kubelet[2625]: E0413 23:40:04.246881 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:40:06.811059 kubelet[2625]: E0413 23:40:06.808993 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:40:08.020496 kubelet[2625]: E0413 23:40:08.020041 2625 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:40:11.769858 kubelet[2625]: E0413 23:40:11.690514 2625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 13 23:40:14.270551 
kubelet[2625]: E0413 23:40:14.130892 2625 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:40:14.767577 kubelet[2625]: E0413 23:40:14.765615 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:40:14.937155 kubelet[2625]: E0413 23:40:14.936399 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:40:18.078309 kubelet[2625]: E0413 23:40:18.077988 2625 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:40:21.895734 kubelet[2625]: I0413 23:40:21.893000 2625 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:40:23.035058 kubelet[2625]: E0413 23:40:23.028783 2625 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a60ef44d168a5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:39:05.472371294 +0000 UTC m=+6.047920652,LastTimestamp:2026-04-13 23:39:05.472371294 +0000 UTC 
m=+6.047920652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:40:28.127181 kubelet[2625]: E0413 23:40:28.123996 2625 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:40:28.817605 kubelet[2625]: E0413 23:40:28.808992 2625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 13 23:40:29.229274 kubelet[2625]: E0413 23:40:29.191096 2625 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:40:32.203141 kubelet[2625]: E0413 23:40:32.198602 2625 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:40:38.215093 kubelet[2625]: E0413 23:40:38.212984 2625 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:40:39.928579 kubelet[2625]: I0413 23:40:39.924627 2625 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:40:43.717483 kubelet[2625]: E0413 23:40:43.509077 2625 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a60ef44d168a5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:39:05.472371294 +0000 UTC m=+6.047920652,LastTimestamp:2026-04-13 23:39:05.472371294 +0000 UTC m=+6.047920652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:40:45.907088 kubelet[2625]: E0413 23:40:45.900767 2625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 13 23:40:46.638477 kubelet[2625]: E0413 23:40:46.627670 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:40:48.243605 kubelet[2625]: E0413 23:40:48.240673 2625 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:40:50.333519 kubelet[2625]: E0413 23:40:50.230390 2625 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:40:53.385971 kubelet[2625]: E0413 23:40:53.384177 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:40:54.362157 
kubelet[2625]: E0413 23:40:54.361614 2625 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:40:54.468559 kubelet[2625]: E0413 23:40:54.456478 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:41:00.003655 kubelet[2625]: E0413 23:40:59.996405 2625 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:41:00.003655 kubelet[2625]: I0413 23:40:59.999082 2625 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:41:00.813474 kubelet[2625]: E0413 23:41:00.794709 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:41:03.687556 kubelet[2625]: E0413 23:41:03.686758 2625 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:41:05.841294 kubelet[2625]: E0413 23:41:05.832945 2625 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a60ef44d168a5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:39:05.472371294 +0000 UTC m=+6.047920652,LastTimestamp:2026-04-13 23:39:05.472371294 +0000 UTC m=+6.047920652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:41:06.536507 kubelet[2625]: E0413 23:41:06.509792 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:41:07.937332 kubelet[2625]: E0413 23:41:07.556282 2625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 13 23:41:10.018715 kubelet[2625]: E0413 23:41:10.017902 2625 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:41:14.217094 kubelet[2625]: E0413 23:41:14.216078 2625 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:41:14.294751 kubelet[2625]: E0413 23:41:14.224830 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:41:14.439756 kubelet[2625]: E0413 23:41:14.434611 2625 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.0.0.10:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:41:20.120542 kubelet[2625]: E0413 23:41:20.116923 2625 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:41:21.800415 kubelet[2625]: I0413 23:41:21.798967 2625 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:41:25.199472 kubelet[2625]: E0413 23:41:25.197887 2625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 13 23:41:26.025563 kubelet[2625]: E0413 23:41:26.024786 2625 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a60ef44d168a5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:39:05.472371294 +0000 UTC m=+6.047920652,LastTimestamp:2026-04-13 23:39:05.472371294 +0000 UTC m=+6.047920652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:41:30.239510 kubelet[2625]: E0413 23:41:30.202849 2625 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:41:30.254584 kubelet[2625]: E0413 23:41:30.247062 
2625 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:41:31.913162 kubelet[2625]: E0413 23:41:31.902737 2625 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:41:32.709133 kubelet[2625]: E0413 23:41:32.633975 2625 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:41:34.685086 kubelet[2625]: E0413 23:41:34.682636 2625 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:41:34.740060 kubelet[2625]: E0413 23:41:34.739857 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:41:40.355855 kubelet[2625]: E0413 23:41:40.355345 2625 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:41:42.732749 kubelet[2625]: E0413 23:41:42.731788 2625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 13 23:41:44.676840 kubelet[2625]: I0413 23:41:44.676584 2625 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:41:50.773556 kubelet[2625]: E0413 23:41:50.770868 2625 eviction_manager.go:292] 
"Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:41:57.664567 kubelet[2625]: I0413 23:41:57.178159 2625 apiserver.go:52] "Watching apiserver" Apr 13 23:42:00.385679 kubelet[2625]: E0413 23:42:00.370032 2625 request.go:1360] "Unexpected error when reading response body" err="net/http: request canceled (Client.Timeout or context cancellation while reading body)" Apr 13 23:42:00.897451 kubelet[2625]: E0413 23:42:00.893723 2625 controller.go:145] "Failed to ensure lease exists, will retry" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)" interval="7s" Apr 13 23:42:00.960621 kubelet[2625]: E0413 23:42:00.958449 2625 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:42:02.021508 update_engine[1585]: I20260413 23:42:02.015755 1585 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 13 23:42:02.021508 update_engine[1585]: I20260413 23:42:02.019467 1585 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 13 23:42:02.113376 update_engine[1585]: I20260413 23:42:02.103438 1585 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 13 23:42:02.260332 update_engine[1585]: I20260413 23:42:02.258013 1585 omaha_request_params.cc:62] Current group set to lts Apr 13 23:42:02.292735 update_engine[1585]: I20260413 23:42:02.277390 1585 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 13 23:42:02.292735 update_engine[1585]: I20260413 23:42:02.279080 1585 update_attempter.cc:643] Scheduling an action processor start. 
Apr 13 23:42:02.411138 update_engine[1585]: I20260413 23:42:02.295527 1585 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 13 23:42:02.432976 update_engine[1585]: I20260413 23:42:02.420071 1585 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 13 23:42:02.461547 update_engine[1585]: I20260413 23:42:02.456841 1585 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 13 23:42:02.470635 update_engine[1585]: I20260413 23:42:02.460093 1585 omaha_request_action.cc:272] Request: Apr 13 23:42:02.470635 update_engine[1585]: Apr 13 23:42:02.470635 update_engine[1585]: Apr 13 23:42:02.470635 update_engine[1585]: Apr 13 23:42:02.470635 update_engine[1585]: Apr 13 23:42:02.470635 update_engine[1585]: Apr 13 23:42:02.470635 update_engine[1585]: Apr 13 23:42:02.470635 update_engine[1585]: Apr 13 23:42:02.470635 update_engine[1585]: Apr 13 23:42:02.470635 update_engine[1585]: I20260413 23:42:02.465979 1585 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 23:42:02.631511 kubelet[2625]: E0413 23:42:01.636939 2625 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a60ef44d168a5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:39:05.472371294 +0000 UTC m=+6.047920652,LastTimestamp:2026-04-13 23:39:05.472371294 +0000 UTC m=+6.047920652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:42:02.712539 kubelet[2625]: I0413 23:42:02.691093 2625 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 13 
23:42:02.935472 locksmithd[1639]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 13 23:42:03.035348 update_engine[1585]: I20260413 23:42:03.011039 1585 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 23:42:03.215143 kubelet[2625]: I0413 23:42:03.209778 2625 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 13 23:42:03.251593 update_engine[1585]: I20260413 23:42:03.236970 1585 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 13 23:42:03.414671 update_engine[1585]: E20260413 23:42:03.337010 1585 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 23:42:03.477441 update_engine[1585]: I20260413 23:42:03.453670 1585 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 13 23:42:04.583958 kubelet[2625]: I0413 23:42:04.582801 2625 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 23:42:09.280456 kubelet[2625]: I0413 23:42:09.276011 2625 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 13 23:42:10.347582 kubelet[2625]: I0413 23:42:10.343215 2625 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 13 23:42:14.036298 update_engine[1585]: I20260413 23:42:13.940002 1585 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 23:42:14.221036 update_engine[1585]: I20260413 23:42:14.044083 1585 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 23:42:14.221036 update_engine[1585]: I20260413 23:42:14.106719 1585 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 13 23:42:14.221036 update_engine[1585]: E20260413 23:42:14.210587 1585 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 23:42:14.299537 update_engine[1585]: I20260413 23:42:14.256699 1585 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 13 23:42:15.885387 kubelet[2625]: E0413 23:42:15.870023 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.498s" Apr 13 23:42:16.428883 kubelet[2625]: E0413 23:42:16.428690 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:42:18.078721 kubelet[2625]: E0413 23:42:18.067461 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:42:20.507705 kubelet[2625]: E0413 23:42:20.375490 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:42:21.560687 kubelet[2625]: E0413 23:42:21.560125 2625 kubelet_node_status.go:460] "Node not becoming ready in time after startup" Apr 13 23:42:24.907911 update_engine[1585]: I20260413 23:42:24.900743 1585 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 23:42:25.127846 update_engine[1585]: I20260413 23:42:25.095872 1585 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 23:42:25.235068 update_engine[1585]: I20260413 23:42:25.214927 1585 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 13 23:42:25.354146 update_engine[1585]: E20260413 23:42:25.353327 1585 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 23:42:25.477537 update_engine[1585]: I20260413 23:42:25.399462 1585 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 13 23:42:31.781431 kubelet[2625]: E0413 23:42:31.453166 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:42:32.684503 kubelet[2625]: E0413 23:42:32.680528 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="16.619s" Apr 13 23:42:35.898445 update_engine[1585]: I20260413 23:42:35.893541 1585 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 23:42:36.033526 update_engine[1585]: I20260413 23:42:36.030292 1585 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 23:42:36.095520 update_engine[1585]: I20260413 23:42:36.091081 1585 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 13 23:42:36.131090 update_engine[1585]: E20260413 23:42:36.113776 1585 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 23:42:36.305895 update_engine[1585]: I20260413 23:42:36.126080 1585 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 13 23:42:36.305895 update_engine[1585]: I20260413 23:42:36.230500 1585 omaha_request_action.cc:617] Omaha request response: Apr 13 23:42:36.305895 update_engine[1585]: E20260413 23:42:36.234488 1585 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 13 23:42:36.305895 update_engine[1585]: I20260413 23:42:36.241977 1585 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Apr 13 23:42:36.305895 update_engine[1585]: I20260413 23:42:36.242197 1585 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 13 23:42:36.305895 update_engine[1585]: I20260413 23:42:36.242208 1585 update_attempter.cc:306] Processing Done. Apr 13 23:42:36.305895 update_engine[1585]: E20260413 23:42:36.242343 1585 update_attempter.cc:619] Update failed. Apr 13 23:42:36.305895 update_engine[1585]: I20260413 23:42:36.242416 1585 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 13 23:42:36.305895 update_engine[1585]: I20260413 23:42:36.242422 1585 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 13 23:42:36.305895 update_engine[1585]: I20260413 23:42:36.242429 1585 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 13 23:42:36.305895 update_engine[1585]: I20260413 23:42:36.247396 1585 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 13 23:42:36.305895 update_engine[1585]: I20260413 23:42:36.252372 1585 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 13 23:42:36.305895 update_engine[1585]: I20260413 23:42:36.252624 1585 omaha_request_action.cc:272] Request: Apr 13 23:42:36.305895 update_engine[1585]: Apr 13 23:42:36.305895 update_engine[1585]: Apr 13 23:42:36.305895 update_engine[1585]: Apr 13 23:42:36.305895 update_engine[1585]: Apr 13 23:42:36.305895 update_engine[1585]: Apr 13 23:42:36.305895 update_engine[1585]: Apr 13 23:42:36.500320 update_engine[1585]: I20260413 23:42:36.252635 1585 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 23:42:36.500320 update_engine[1585]: I20260413 23:42:36.424350 1585 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 23:42:36.500320 update_engine[1585]: I20260413 23:42:36.479565 1585 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 13 23:42:36.500501 locksmithd[1639]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 13 23:42:36.515919 update_engine[1585]: E20260413 23:42:36.500212 1585 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 23:42:36.515919 update_engine[1585]: I20260413 23:42:36.500504 1585 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 13 23:42:36.515919 update_engine[1585]: I20260413 23:42:36.500515 1585 omaha_request_action.cc:617] Omaha request response: Apr 13 23:42:36.515919 update_engine[1585]: I20260413 23:42:36.500590 1585 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 13 23:42:36.515919 update_engine[1585]: I20260413 23:42:36.500597 1585 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 13 23:42:36.515919 update_engine[1585]: I20260413 23:42:36.511528 1585 update_attempter.cc:306] Processing Done. Apr 13 23:42:36.515919 update_engine[1585]: I20260413 23:42:36.511767 1585 update_attempter.cc:310] Error event sent. 
Apr 13 23:42:36.515919 update_engine[1585]: I20260413 23:42:36.511979 1585 update_check_scheduler.cc:74] Next update check in 48m17s Apr 13 23:42:37.318857 locksmithd[1639]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 13 23:42:40.663004 kubelet[2625]: E0413 23:42:40.638707 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:42:42.804940 kubelet[2625]: I0413 23:42:42.795350 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=37.794895803 podStartE2EDuration="37.794895803s" podCreationTimestamp="2026-04-13 23:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 23:42:42.53602317 +0000 UTC m=+223.111572519" watchObservedRunningTime="2026-04-13 23:42:42.794895803 +0000 UTC m=+223.370445132" Apr 13 23:42:42.804940 kubelet[2625]: E0413 23:42:42.797362 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.096s" Apr 13 23:42:49.226572 kubelet[2625]: E0413 23:42:49.200960 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:42:49.565404 kubelet[2625]: I0413 23:42:49.399309 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=40.397454739 podStartE2EDuration="40.397454739s" podCreationTimestamp="2026-04-13 23:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 23:42:48.936068378 +0000 UTC 
m=+229.511617702" watchObservedRunningTime="2026-04-13 23:42:49.397454739 +0000 UTC m=+229.973004071" Apr 13 23:42:56.480385 kubelet[2625]: E0413 23:42:56.478634 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:42:59.008045 kubelet[2625]: E0413 23:42:59.007400 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.345s" Apr 13 23:43:06.188141 kubelet[2625]: E0413 23:43:06.187136 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:43:20.405393 kubelet[2625]: E0413 23:43:20.396736 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:43:21.607213 kubelet[2625]: E0413 23:43:21.605535 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="22.279s" Apr 13 23:43:38.328681 kubelet[2625]: E0413 23:43:38.132590 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:43:49.179604 kubelet[2625]: E0413 23:43:49.173739 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:44:00.084564 kubelet[2625]: E0413 23:44:00.083689 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:44:01.213580 
kubelet[2625]: E0413 23:44:01.210909 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="39.604s" Apr 13 23:44:04.385470 kubelet[2625]: E0413 23:44:04.379390 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.018s" Apr 13 23:44:07.012460 kubelet[2625]: E0413 23:44:06.635321 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.205s" Apr 13 23:44:07.528648 kubelet[2625]: E0413 23:44:07.310018 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:44:19.554908 kubelet[2625]: E0413 23:44:18.019948 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:44:22.628469 kubelet[2625]: E0413 23:44:22.617318 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.61s" Apr 13 23:44:25.352908 kubelet[2625]: E0413 23:44:25.334269 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:44:27.884552 kubelet[2625]: E0413 23:44:27.791196 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:44:27.884552 kubelet[2625]: E0413 23:44:27.882288 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:44:29.114463 kubelet[2625]: E0413 23:44:29.113599 2625 
kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:44:37.835084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa-rootfs.mount: Deactivated successfully. Apr 13 23:44:38.138564 containerd[1599]: time="2026-04-13T23:44:38.100763096Z" level=error msg="ttrpc: received message on inactive stream" stream=31 Apr 13 23:44:38.243558 kubelet[2625]: E0413 23:44:38.176517 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.552s" Apr 13 23:44:38.295059 containerd[1599]: time="2026-04-13T23:44:38.127979747Z" level=error msg="failed to handle container TaskExit event container_id:\"114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa\" id:\"114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa\" pid:2830 exit_status:1 exited_at:{seconds:1776123865 nanos:25708641}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 13 23:44:39.028683 kubelet[2625]: E0413 23:44:39.025771 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:44:39.723406 containerd[1599]: time="2026-04-13T23:44:39.699092639Z" level=info msg="TaskExit event container_id:\"114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa\" id:\"114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa\" pid:2830 exit_status:1 exited_at:{seconds:1776123865 nanos:25708641}" Apr 13 23:44:47.633934 kubelet[2625]: E0413 23:44:47.629748 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 
13 23:44:50.178670 containerd[1599]: time="2026-04-13T23:44:49.940308536Z" level=error msg="failed to shutdown shim task and the shim might be leaked" error="context deadline exceeded: unknown" id=114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa Apr 13 23:44:50.743537 containerd[1599]: time="2026-04-13T23:44:50.691828853Z" level=info msg="shim disconnected" id=114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa namespace=k8s.io Apr 13 23:44:50.880323 containerd[1599]: time="2026-04-13T23:44:50.804337559Z" level=warning msg="cleaning up after shim disconnected" id=114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa namespace=k8s.io Apr 13 23:44:51.496897 containerd[1599]: time="2026-04-13T23:44:51.010050572Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 23:44:51.978476 containerd[1599]: time="2026-04-13T23:44:51.827367647Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa Apr 13 23:44:52.969492 containerd[1599]: time="2026-04-13T23:44:52.734665563Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id 114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa delete" error="fork/exec /usr/bin/containerd-shim-runc-v2: no such file or directory" namespace=k8s.io Apr 13 23:44:53.111461 containerd[1599]: time="2026-04-13T23:44:53.081319902Z" level=warning msg="failed to clean up after shim disconnected" error=": fork/exec /usr/bin/containerd-shim-runc-v2: no such file or directory" id=114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa namespace=k8s.io Apr 13 23:44:58.775879 kubelet[2625]: E0413 23:44:58.764670 2625 
kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="20.587s" Apr 13 23:44:59.352609 kubelet[2625]: E0413 23:44:59.231388 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:45:02.128560 kubelet[2625]: E0413 23:45:02.125232 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.217s" Apr 13 23:45:06.409875 kubelet[2625]: E0413 23:45:06.405359 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.183s" Apr 13 23:45:06.737896 kubelet[2625]: E0413 23:45:06.726955 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:45:07.540739 kubelet[2625]: I0413 23:45:07.540305 2625 scope.go:117] "RemoveContainer" containerID="114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa" Apr 13 23:45:08.040476 kubelet[2625]: E0413 23:45:08.032928 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:45:15.988710 kubelet[2625]: I0413 23:45:15.988237 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=183.987983953 podStartE2EDuration="3m3.987983953s" podCreationTimestamp="2026-04-13 23:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 23:42:56.40247033 +0000 UTC m=+236.978019655" watchObservedRunningTime="2026-04-13 23:45:15.987983953 +0000 UTC m=+376.563533275" Apr 13 23:45:16.327868 
kubelet[2625]: E0413 23:45:16.309051 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:45:17.730372 kubelet[2625]: E0413 23:45:17.729191 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.319s" Apr 13 23:45:18.611663 containerd[1599]: time="2026-04-13T23:45:18.604660437Z" level=info msg="CreateContainer within sandbox \"6eb4e023684f0aae0c0b695bdae7b9e1c4c88cd28d650a1756ae84c21f4b692c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 13 23:45:21.862936 kubelet[2625]: E0413 23:45:21.827374 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.085s" Apr 13 23:45:23.177095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3368264298.mount: Deactivated successfully. Apr 13 23:45:24.240392 kubelet[2625]: E0413 23:45:24.232821 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:45:25.710626 containerd[1599]: time="2026-04-13T23:45:25.705449267Z" level=info msg="CreateContainer within sandbox \"6eb4e023684f0aae0c0b695bdae7b9e1c4c88cd28d650a1756ae84c21f4b692c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cd83f73202aa9f1cb3056dbadd9881780d246ac6db583bab6175de30c359a3a5\"" Apr 13 23:45:26.131444 kubelet[2625]: E0413 23:45:26.122687 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:45:28.111458 containerd[1599]: time="2026-04-13T23:45:28.105387911Z" level=info msg="StartContainer for 
\"cd83f73202aa9f1cb3056dbadd9881780d246ac6db583bab6175de30c359a3a5\"" Apr 13 23:45:30.899903 kubelet[2625]: E0413 23:45:30.889552 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:45:32.253693 kubelet[2625]: E0413 23:45:32.230436 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.244s" Apr 13 23:45:37.322981 kubelet[2625]: E0413 23:45:37.321695 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:45:37.362720 kubelet[2625]: E0413 23:45:37.358976 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.996s" Apr 13 23:45:38.872401 kubelet[2625]: E0413 23:45:38.861511 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.486s" Apr 13 23:45:39.399570 containerd[1599]: time="2026-04-13T23:45:39.396853770Z" level=error msg="get state for cd83f73202aa9f1cb3056dbadd9881780d246ac6db583bab6175de30c359a3a5" error="context deadline exceeded: unknown" Apr 13 23:45:39.423456 containerd[1599]: time="2026-04-13T23:45:39.400471379Z" level=warning msg="unknown status" status=0 Apr 13 23:45:40.485566 containerd[1599]: time="2026-04-13T23:45:40.475226138Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 13 23:45:40.909039 kubelet[2625]: E0413 23:45:40.905588 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.023s" Apr 13 23:45:44.333454 kubelet[2625]: E0413 23:45:44.295725 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" Apr 13 23:45:45.138800 containerd[1599]: time="2026-04-13T23:45:45.131933375Z" level=info msg="StartContainer for \"cd83f73202aa9f1cb3056dbadd9881780d246ac6db583bab6175de30c359a3a5\" returns successfully" Apr 13 23:45:47.727269 kubelet[2625]: E0413 23:45:47.716946 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.786s" Apr 13 23:45:51.486955 kubelet[2625]: E0413 23:45:51.401645 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:45:55.403038 kubelet[2625]: E0413 23:45:55.396542 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:45:59.054147 kubelet[2625]: E0413 23:45:59.039133 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.051s" Apr 13 23:46:01.607617 systemd[1]: Reloading requested from client PID 3000 ('systemctl') (unit session-9.scope)... Apr 13 23:46:01.608540 systemd[1]: Reloading... Apr 13 23:46:02.483849 kubelet[2625]: E0413 23:46:02.482965 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:46:05.513468 zram_generator::config[3035]: No configuration found. 
Apr 13 23:46:08.239562 kubelet[2625]: E0413 23:46:08.199045 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.892s" Apr 13 23:46:08.923980 kubelet[2625]: E0413 23:46:08.922329 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:46:09.713785 kubelet[2625]: E0413 23:46:09.709019 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:46:10.162826 kubelet[2625]: E0413 23:46:10.092080 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:46:12.091266 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 23:46:15.681633 kubelet[2625]: E0413 23:46:15.679399 2625 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:46:16.086013 kubelet[2625]: E0413 23:46:16.078708 2625 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.39s" Apr 13 23:46:16.292089 systemd[1]: Reloading finished in 14673 ms. Apr 13 23:46:16.787912 kubelet[2625]: E0413 23:46:16.787405 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:46:19.360755 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 13 23:46:19.879982 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 23:46:19.898757 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:46:20.687002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:46:24.038992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:46:24.295688 (kubelet)[3095]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 23:46:31.728479 kubelet[3095]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 23:46:31.728479 kubelet[3095]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 23:46:31.728479 kubelet[3095]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 13 23:46:31.815954 kubelet[3095]: I0413 23:46:31.728617 3095 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 23:46:32.291775 kubelet[3095]: I0413 23:46:32.290429 3095 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 23:46:32.304720 kubelet[3095]: I0413 23:46:32.300653 3095 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 23:46:32.422548 kubelet[3095]: I0413 23:46:32.417513 3095 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 23:46:32.641175 kubelet[3095]: I0413 23:46:32.638370 3095 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 23:46:33.051365 kubelet[3095]: I0413 23:46:33.035634 3095 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 23:46:35.918190 kubelet[3095]: E0413 23:46:35.840921 3095 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 23:46:35.937778 kubelet[3095]: I0413 23:46:35.920945 3095 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 23:46:36.828932 kubelet[3095]: I0413 23:46:36.824435 3095 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 13 23:46:36.924651 kubelet[3095]: I0413 23:46:36.914987 3095 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 23:46:36.971467 kubelet[3095]: I0413 23:46:36.931880 3095 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 13 23:46:36.986408 kubelet[3095]: I0413 23:46:36.976611 3095 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 23:46:36.986408 
kubelet[3095]: I0413 23:46:36.982949 3095 container_manager_linux.go:303] "Creating device plugin manager"
Apr 13 23:46:36.997295 kubelet[3095]: I0413 23:46:36.989952 3095 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 23:46:37.112346 kubelet[3095]: I0413 23:46:37.102392 3095 kubelet.go:480] "Attempting to sync node with API server"
Apr 13 23:46:37.116963 kubelet[3095]: I0413 23:46:37.112383 3095 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 23:46:37.185464 kubelet[3095]: I0413 23:46:37.181414 3095 kubelet.go:386] "Adding apiserver pod source"
Apr 13 23:46:37.228089 kubelet[3095]: I0413 23:46:37.220484 3095 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 23:46:38.175676 kubelet[3095]: I0413 23:46:38.171877 3095 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 23:46:39.003660 kubelet[3095]: I0413 23:46:38.984526 3095 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 23:46:39.969623 kubelet[3095]: I0413 23:46:39.962833 3095 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 13 23:46:40.028802 kubelet[3095]: I0413 23:46:40.025839 3095 server.go:1289] "Started kubelet"
Apr 13 23:46:40.093431 kubelet[3095]: I0413 23:46:40.063768 3095 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 13 23:46:40.318763 kubelet[3095]: I0413 23:46:40.269495 3095 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 13 23:46:41.574840 kubelet[3095]: I0413 23:46:41.574423 3095 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 13 23:46:44.020395 kubelet[3095]: I0413 23:46:44.016438 3095 apiserver.go:52] "Watching apiserver"
Apr 13 23:46:45.259475 kubelet[3095]: I0413 23:46:45.253796 3095 server.go:317] "Adding debug handlers to kubelet server"
Apr 13 23:46:49.259381 kubelet[3095]: I0413 23:46:49.228876 3095 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 13 23:46:49.384934 kubelet[3095]: I0413 23:46:49.382486 3095 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 13 23:46:50.609782 kubelet[3095]: I0413 23:46:50.608918 3095 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 13 23:46:51.092929 kubelet[3095]: I0413 23:46:51.092180 3095 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 13 23:46:52.924544 kubelet[3095]: I0413 23:46:52.916205 3095 reconciler.go:26] "Reconciler: start to sync state"
Apr 13 23:46:53.930187 sudo[3116]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 13 23:46:53.970698 sudo[3116]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 13 23:46:58.358405 kubelet[3095]: I0413 23:46:58.343463 3095 factory.go:223] Registration of the systemd container factory successfully
Apr 13 23:46:58.829980 kubelet[3095]: E0413 23:46:58.148963 3095 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 13 23:46:59.406502 kubelet[3095]: I0413 23:46:59.289099 3095 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 13 23:47:01.421775 kubelet[3095]: W0413 23:47:01.415717 3095 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "localhost", Attributes: {"<%!p(networktype.keyType=grpc.internal.transport.networktype)>": "unix" }, }. Err: connection error: desc = "transport: Error while dialing: dial unix:///run/containerd/containerd.sock: timeout"
Apr 13 23:47:02.315994 kubelet[3095]: I0413 23:47:02.313377 3095 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: context deadline exceeded
Apr 13 23:47:03.309394 kubelet[3095]: W0413 23:47:03.296046 3095 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "localhost", Attributes: {"<%!p(networktype.keyType=grpc.internal.transport.networktype)>": "unix" }, }. Err: connection error: desc = "transport: failed to write client preface: write unix @->/run/containerd/containerd.sock: use of closed network connection"
Apr 13 23:47:04.049360 kubelet[3095]: I0413 23:47:03.452306 3095 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 13 23:47:04.049360 kubelet[3095]: I0413 23:47:04.119601 3095 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 13 23:47:04.049360 kubelet[3095]: I0413 23:47:04.122961 3095 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 13 23:47:04.478429 kubelet[3095]: I0413 23:47:04.402079 3095 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 13 23:47:04.814506 kubelet[3095]: I0413 23:47:04.790201 3095 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 13 23:47:06.680535 kubelet[3095]: E0413 23:47:06.602690 3095 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 23:47:06.992789 kubelet[3095]: E0413 23:47:06.968427 3095 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 23:47:07.408690 kubelet[3095]: E0413 23:47:07.401972 3095 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 23:47:08.133516 kubelet[3095]: E0413 23:47:08.129426 3095 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 23:47:09.124429 kubelet[3095]: E0413 23:47:09.116833 3095 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 23:47:11.099476 kubelet[3095]: E0413 23:47:10.943497 3095 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 23:47:12.297740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abd059c5222283a91d0160b240534b84d1d2035b3ef273a4e0803d5aab1d1a4a-rootfs.mount: Deactivated successfully.
Apr 13 23:47:12.898262 containerd[1599]: time="2026-04-13T23:47:12.889137877Z" level=info msg="shim disconnected" id=abd059c5222283a91d0160b240534b84d1d2035b3ef273a4e0803d5aab1d1a4a namespace=k8s.io
Apr 13 23:47:13.003408 containerd[1599]: time="2026-04-13T23:47:13.001089730Z" level=warning msg="cleaning up after shim disconnected" id=abd059c5222283a91d0160b240534b84d1d2035b3ef273a4e0803d5aab1d1a4a namespace=k8s.io
Apr 13 23:47:13.003408 containerd[1599]: time="2026-04-13T23:47:13.001504702Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 23:47:14.290096 kubelet[3095]: E0413 23:47:14.284912 3095 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 13 23:47:15.885095 containerd[1599]: time="2026-04-13T23:47:15.867486293Z" level=error msg="ttrpc: received message on inactive stream" stream=31
Apr 13 23:47:15.886747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd83f73202aa9f1cb3056dbadd9881780d246ac6db583bab6175de30c359a3a5-rootfs.mount: Deactivated successfully.
Apr 13 23:47:16.284819 containerd[1599]: time="2026-04-13T23:47:16.280539839Z" level=error msg="failed to handle container TaskExit event container_id:\"cd83f73202aa9f1cb3056dbadd9881780d246ac6db583bab6175de30c359a3a5\" id:\"cd83f73202aa9f1cb3056dbadd9881780d246ac6db583bab6175de30c359a3a5\" pid:2980 exit_status:1 exited_at:{seconds:1776124024 nanos:102016085}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 13 23:47:17.894260 containerd[1599]: time="2026-04-13T23:47:17.806410591Z" level=info msg="TaskExit event container_id:\"cd83f73202aa9f1cb3056dbadd9881780d246ac6db583bab6175de30c359a3a5\" id:\"cd83f73202aa9f1cb3056dbadd9881780d246ac6db583bab6175de30c359a3a5\" pid:2980 exit_status:1 exited_at:{seconds:1776124024 nanos:102016085}"
Apr 13 23:47:19.381337 kubelet[3095]: E0413 23:47:19.362435 3095 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 13 23:47:23.235701 sudo[3116]: pam_unix(sudo:session): session closed for user root
Apr 13 23:47:24.443497 kubelet[3095]: E0413 23:47:24.406430 3095 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 13 23:47:25.910041 containerd[1599]: time="2026-04-13T23:47:25.814986582Z" level=info msg="shim disconnected" id=cd83f73202aa9f1cb3056dbadd9881780d246ac6db583bab6175de30c359a3a5 namespace=k8s.io
Apr 13 23:47:26.534899 containerd[1599]: time="2026-04-13T23:47:26.288078046Z" level=warning msg="cleaning up after shim disconnected" id=cd83f73202aa9f1cb3056dbadd9881780d246ac6db583bab6175de30c359a3a5 namespace=k8s.io
Apr 13 23:47:26.642466 containerd[1599]: time="2026-04-13T23:47:26.598256105Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 23:47:29.485997 containerd[1599]: time="2026-04-13T23:47:29.391544168Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=cd83f73202aa9f1cb3056dbadd9881780d246ac6db583bab6175de30c359a3a5
Apr 13 23:47:29.715610 kubelet[3095]: E0413 23:47:29.697092 3095 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 13 23:47:29.810645 kubelet[3095]: I0413 23:47:29.809710 3095 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 13 23:47:29.920000 kubelet[3095]: I0413 23:47:29.899964 3095 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 13 23:47:29.920000 kubelet[3095]: I0413 23:47:29.905742 3095 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 23:47:30.129086 kubelet[3095]: I0413 23:47:30.121615 3095 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 13 23:47:30.196276 kubelet[3095]: I0413 23:47:30.125927 3095 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 13 23:47:30.214075 kubelet[3095]: I0413 23:47:30.209573 3095 policy_none.go:49] "None policy: Start"
Apr 13 23:47:30.214075 kubelet[3095]: I0413 23:47:30.210429 3095 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 13 23:47:30.214075 kubelet[3095]: I0413 23:47:30.210616 3095 state_mem.go:35] "Initializing new in-memory state store"
Apr 13 23:47:30.462664 kubelet[3095]: I0413 23:47:30.431658 3095 state_mem.go:75] "Updated machine memory state"
Apr 13 23:47:32.996751 kubelet[3095]: E0413 23:47:32.975496 3095 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 13 23:47:33.165555 kubelet[3095]: I0413 23:47:33.160584 3095 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 13 23:47:33.433834 kubelet[3095]: I0413 23:47:33.281446 3095 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 13 23:47:33.977316 kubelet[3095]: I0413 23:47:33.976395 3095 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 13 23:47:34.593581 kubelet[3095]: E0413 23:47:34.562507 3095 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 13 23:47:35.455735 kubelet[3095]: I0413 23:47:35.451868 3095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/867491ae9245c87a1735f75bc55f305c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"867491ae9245c87a1735f75bc55f305c\") " pod="kube-system/kube-apiserver-localhost"
Apr 13 23:47:35.597306 kubelet[3095]: I0413 23:47:35.494024 3095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/867491ae9245c87a1735f75bc55f305c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"867491ae9245c87a1735f75bc55f305c\") " pod="kube-system/kube-apiserver-localhost"
Apr 13 23:47:35.639155 kubelet[3095]: I0413 23:47:35.633433 3095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/867491ae9245c87a1735f75bc55f305c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"867491ae9245c87a1735f75bc55f305c\") " pod="kube-system/kube-apiserver-localhost"
Apr 13 23:47:35.926693 kubelet[3095]: I0413 23:47:35.920060 3095 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 13 23:47:36.318335 kubelet[3095]: I0413 23:47:36.317453 3095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 13 23:47:36.456534 kubelet[3095]: I0413 23:47:36.454874 3095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 13 23:47:36.456534 kubelet[3095]: I0413 23:47:36.455050 3095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 13 23:47:36.502986 kubelet[3095]: I0413 23:47:36.501894 3095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 13 23:47:36.535591 kubelet[3095]: I0413 23:47:36.533540 3095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 13 23:47:37.105550 kubelet[3095]: I0413 23:47:37.031963 3095 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 13 23:47:37.423561 kubelet[3095]: I0413 23:47:37.420689 3095 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 13 23:47:37.998940 kubelet[3095]: I0413 23:47:37.997641 3095 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 13 23:47:38.127152 kubelet[3095]: I0413 23:47:38.040425 3095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost"
Apr 13 23:47:39.417995 kubelet[3095]: E0413 23:47:39.401785 3095 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 13 23:47:39.504678 kubelet[3095]: E0413 23:47:39.474489 3095 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 13 23:47:39.504678 kubelet[3095]: E0413 23:47:39.495588 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:47:40.072340 kubelet[3095]: E0413 23:47:40.060454 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:47:41.142525 kubelet[3095]: I0413 23:47:41.032437 3095 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 13 23:47:41.390866 kubelet[3095]: E0413 23:47:41.307056 3095 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 13 23:47:41.390866 kubelet[3095]: I0413 23:47:41.311701 3095 scope.go:117] "RemoveContainer" containerID="abd059c5222283a91d0160b240534b84d1d2035b3ef273a4e0803d5aab1d1a4a"
Apr 13 23:47:41.865671 kubelet[3095]: E0413 23:47:41.837034 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:47:47.181016 kubelet[3095]: I0413 23:47:47.178527 3095 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Apr 13 23:47:47.505139 kubelet[3095]: I0413 23:47:47.190014 3095 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 13 23:47:49.376484 kubelet[3095]: E0413 23:47:49.311015 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.31s"
Apr 13 23:47:49.771703 containerd[1599]: time="2026-04-13T23:47:49.340899433Z" level=info msg="CreateContainer within sandbox \"fd39e6ba049f25172b871051ff662688823f734c044d1ef9098be63aa4680f60\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 13 23:47:59.153446 kubelet[3095]: I0413 23:47:59.139532 3095 scope.go:117] "RemoveContainer" containerID="114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa"
Apr 13 23:48:03.872967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3917560487.mount: Deactivated successfully.
Apr 13 23:48:06.741890 kubelet[3095]: I0413 23:48:06.741332 3095 scope.go:117] "RemoveContainer" containerID="cd83f73202aa9f1cb3056dbadd9881780d246ac6db583bab6175de30c359a3a5"
Apr 13 23:48:06.863325 kubelet[3095]: E0413 23:48:06.823914 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:48:07.982385 containerd[1599]: time="2026-04-13T23:48:07.981840404Z" level=info msg="CreateContainer within sandbox \"fd39e6ba049f25172b871051ff662688823f734c044d1ef9098be63aa4680f60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ea12890a6750bf38451a450ea2d9b900ecae8536bf34d2451178d79ac47b26ff\""
Apr 13 23:48:09.340741 kubelet[3095]: E0413 23:48:09.330494 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="18.256s"
Apr 13 23:48:09.421034 kubelet[3095]: E0413 23:48:09.419143 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:48:10.409521 kubelet[3095]: I0413 23:48:10.401682 3095 scope.go:117] "RemoveContainer" containerID="114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa"
Apr 13 23:48:10.493343 containerd[1599]: time="2026-04-13T23:48:10.489568440Z" level=info msg="RemoveContainer for \"114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa\""
Apr 13 23:48:13.095460 containerd[1599]: time="2026-04-13T23:48:13.093430302Z" level=info msg="StartContainer for \"ea12890a6750bf38451a450ea2d9b900ecae8536bf34d2451178d79ac47b26ff\""
Apr 13 23:48:13.324834 containerd[1599]: time="2026-04-13T23:48:13.236518867Z" level=info msg="RemoveContainer for \"114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa\" returns successfully"
Apr 13 23:48:14.236440 containerd[1599]: time="2026-04-13T23:48:14.234600694Z" level=info msg="CreateContainer within sandbox \"6eb4e023684f0aae0c0b695bdae7b9e1c4c88cd28d650a1756ae84c21f4b692c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}"
Apr 13 23:48:16.651226 containerd[1599]: time="2026-04-13T23:48:16.323584346Z" level=error msg="ContainerStatus for \"114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa\": not found"
Apr 13 23:48:18.938618 kubelet[3095]: E0413 23:48:18.369967 3095 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa\": not found" containerID="114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa"
Apr 13 23:48:20.010628 kubelet[3095]: I0413 23:48:19.989734 3095 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa"} err="failed to get container status \"114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa\": rpc error: code = NotFound desc = an error occurred when try to find container \"114f7ec35fece13e68a79d4358a112034d77f47fd7d45d6f5ce881d815496eaa\": not found"
Apr 13 23:48:32.237238 kubelet[3095]: E0413 23:48:32.236714 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="21.598s"
Apr 13 23:48:33.236461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount971901844.mount: Deactivated successfully.
Apr 13 23:48:33.386441 kubelet[3095]: E0413 23:48:32.913663 3095 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/pod39798d73a6894e44ae801eb773bf9a39/ea12890a6750bf38451a450ea2d9b900ecae8536bf34d2451178d79ac47b26ff\": RecentStats: unable to find data in memory cache]"
Apr 13 23:48:35.311654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2046578956.mount: Deactivated successfully.
Apr 13 23:48:38.791590 containerd[1599]: time="2026-04-13T23:48:38.785296521Z" level=info msg="CreateContainer within sandbox \"6eb4e023684f0aae0c0b695bdae7b9e1c4c88cd28d650a1756ae84c21f4b692c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"4a7aadd8c427d0b70216495f2fdcb52522da5d0d4f51e4c07c8cb1fd32202b34\""
Apr 13 23:48:40.703530 containerd[1599]: time="2026-04-13T23:48:40.622750267Z" level=info msg="StartContainer for \"4a7aadd8c427d0b70216495f2fdcb52522da5d0d4f51e4c07c8cb1fd32202b34\""
Apr 13 23:48:42.264524 containerd[1599]: time="2026-04-13T23:48:42.166494782Z" level=info msg="StartContainer for \"ea12890a6750bf38451a450ea2d9b900ecae8536bf34d2451178d79ac47b26ff\" returns successfully"
Apr 13 23:49:06.715711 kubelet[3095]: E0413 23:49:06.630950 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="33.958s"
Apr 13 23:49:25.677935 containerd[1599]: time="2026-04-13T23:49:25.675243429Z" level=info msg="StartContainer for \"4a7aadd8c427d0b70216495f2fdcb52522da5d0d4f51e4c07c8cb1fd32202b34\" returns successfully"
Apr 13 23:49:30.015449 kubelet[3095]: E0413 23:49:30.013915 3095 kubelet_node_status.go:460] "Node not becoming ready in time after startup"
Apr 13 23:49:35.899750 kubelet[3095]: E0413 23:49:35.854010 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="28.996s"
Apr 13 23:49:36.602930 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
Apr 13 23:49:37.158952 kubelet[3095]: E0413 23:49:37.151893 3095 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 13 23:49:39.209283 systemd-tmpfiles[3297]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 23:49:39.225371 systemd-tmpfiles[3297]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 23:49:39.237341 systemd-tmpfiles[3297]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 23:49:39.237753 systemd-tmpfiles[3297]: ACLs are not supported, ignoring.
Apr 13 23:49:39.242867 systemd-tmpfiles[3297]: ACLs are not supported, ignoring.
Apr 13 23:49:39.837752 systemd-tmpfiles[3297]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 23:49:39.837763 systemd-tmpfiles[3297]: Skipping /boot
Apr 13 23:49:40.824698 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Apr 13 23:49:40.847967 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
Apr 13 23:50:04.930587 kubelet[3095]: E0413 23:50:03.948884 3095 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 13 23:50:13.878759 kubelet[3095]: E0413 23:50:12.935853 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:50:25.779499 kubelet[3095]: E0413 23:50:25.760168 3095 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 13 23:50:28.598699 kubelet[3095]: E0413 23:50:28.590286 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:50:37.594752 kubelet[3095]: E0413 23:50:37.501678 3095 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 13 23:50:40.620666 kubelet[3095]: E0413 23:50:40.603387 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:50:42.210464 containerd[1599]: time="2026-04-13T23:50:41.818949978Z" level=error msg="failed to handle container TaskExit event container_id:\"4a7aadd8c427d0b70216495f2fdcb52522da5d0d4f51e4c07c8cb1fd32202b34\" id:\"4a7aadd8c427d0b70216495f2fdcb52522da5d0d4f51e4c07c8cb1fd32202b34\" pid:3269 exit_status:1 exited_at:{seconds:1776124230 nanos:983186694}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 13 23:50:42.475655 containerd[1599]: time="2026-04-13T23:50:41.987010797Z" level=error msg="ttrpc: received message on inactive stream" stream=25
Apr 13 23:50:42.615736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a7aadd8c427d0b70216495f2fdcb52522da5d0d4f51e4c07c8cb1fd32202b34-rootfs.mount: Deactivated successfully.
Apr 13 23:50:45.125703 containerd[1599]: time="2026-04-13T23:50:44.893860981Z" level=info msg="TaskExit event container_id:\"4a7aadd8c427d0b70216495f2fdcb52522da5d0d4f51e4c07c8cb1fd32202b34\" id:\"4a7aadd8c427d0b70216495f2fdcb52522da5d0d4f51e4c07c8cb1fd32202b34\" pid:3269 exit_status:1 exited_at:{seconds:1776124230 nanos:983186694}"
Apr 13 23:50:46.365355 kubelet[3095]: E0413 23:50:46.361758 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m9.922s"
Apr 13 23:50:47.407322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea12890a6750bf38451a450ea2d9b900ecae8536bf34d2451178d79ac47b26ff-rootfs.mount: Deactivated successfully.
Apr 13 23:50:47.642267 containerd[1599]: time="2026-04-13T23:50:47.161537651Z" level=error msg="ttrpc: received message on inactive stream" stream=25
Apr 13 23:50:47.642267 containerd[1599]: time="2026-04-13T23:50:47.470994673Z" level=error msg="failed to handle container TaskExit event container_id:\"ea12890a6750bf38451a450ea2d9b900ecae8536bf34d2451178d79ac47b26ff\" id:\"ea12890a6750bf38451a450ea2d9b900ecae8536bf34d2451178d79ac47b26ff\" pid:3232 exit_status:1 exited_at:{seconds:1776124235 nanos:381054698}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 13 23:50:48.528679 kubelet[3095]: E0413 23:50:48.521474 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:50:50.483567 kubelet[3095]: E0413 23:50:50.197645 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:50:51.029499 containerd[1599]: time="2026-04-13T23:50:51.023624324Z" level=info msg="shim disconnected" id=4a7aadd8c427d0b70216495f2fdcb52522da5d0d4f51e4c07c8cb1fd32202b34 namespace=k8s.io
Apr 13 23:50:51.126725 containerd[1599]: time="2026-04-13T23:50:51.037735633Z" level=warning msg="cleaning up after shim disconnected" id=4a7aadd8c427d0b70216495f2fdcb52522da5d0d4f51e4c07c8cb1fd32202b34 namespace=k8s.io
Apr 13 23:50:51.173556 containerd[1599]: time="2026-04-13T23:50:51.152077505Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 23:50:54.840942 containerd[1599]: time="2026-04-13T23:50:54.830387971Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=4a7aadd8c427d0b70216495f2fdcb52522da5d0d4f51e4c07c8cb1fd32202b34
Apr 13 23:50:55.099702 containerd[1599]: time="2026-04-13T23:50:55.095869861Z" level=info msg="TaskExit event container_id:\"ea12890a6750bf38451a450ea2d9b900ecae8536bf34d2451178d79ac47b26ff\" id:\"ea12890a6750bf38451a450ea2d9b900ecae8536bf34d2451178d79ac47b26ff\" pid:3232 exit_status:1 exited_at:{seconds:1776124235 nanos:381054698}"
Apr 13 23:50:55.802785 containerd[1599]: time="2026-04-13T23:50:55.778877624Z" level=warning msg="cleanup warnings time=\"2026-04-13T23:50:55Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/4a7aadd8c427d0b70216495f2fdcb52522da5d0d4f51e4c07c8cb1fd32202b34/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 13 23:50:57.794168 kubelet[3095]: E0413 23:50:57.789723 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:50:59.003578 kubelet[3095]: E0413 23:50:59.000299 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:51:00.607686 containerd[1599]: time="2026-04-13T23:51:00.590662046Z" level=info msg="shim disconnected" id=ea12890a6750bf38451a450ea2d9b900ecae8536bf34d2451178d79ac47b26ff namespace=k8s.io
Apr 13 23:51:00.686779 containerd[1599]: time="2026-04-13T23:51:00.600890031Z" level=warning msg="cleaning up after shim disconnected" id=ea12890a6750bf38451a450ea2d9b900ecae8536bf34d2451178d79ac47b26ff namespace=k8s.io
Apr 13 23:51:00.686779 containerd[1599]: time="2026-04-13T23:51:00.631594767Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 23:51:04.968738 kubelet[3095]: E0413 23:51:04.895648 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:51:05.017346 kubelet[3095]: E0413 23:51:04.976839 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="17.599s"
Apr 13 23:51:05.017346 kubelet[3095]: E0413 23:51:04.983854 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:51:05.110973 kubelet[3095]: I0413 23:51:05.110039 3095 status_manager.go:355] "Container readiness changed for unknown container" pod="kube-system/kube-scheduler-localhost" containerID="containerd://abd059c5222283a91d0160b240534b84d1d2035b3ef273a4e0803d5aab1d1a4a"
Apr 13 23:51:05.120024 kubelet[3095]: I0413 23:51:05.114665 3095 status_manager.go:418] "Container startup changed for unknown container" pod="kube-system/kube-scheduler-localhost" containerID="containerd://abd059c5222283a91d0160b240534b84d1d2035b3ef273a4e0803d5aab1d1a4a"
Apr 13 23:51:05.127539 kubelet[3095]: I0413 23:51:05.125924 3095 status_manager.go:418] "Container startup changed for unknown container" pod="kube-system/kube-controller-manager-localhost" containerID="containerd://cd83f73202aa9f1cb3056dbadd9881780d246ac6db583bab6175de30c359a3a5"
Apr 13 23:51:05.224347 containerd[1599]: time="2026-04-13T23:51:05.222650835Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=ea12890a6750bf38451a450ea2d9b900ecae8536bf34d2451178d79ac47b26ff
Apr 13 23:51:10.201534 kubelet[3095]: E0413 23:51:10.190073 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.058s"
Apr 13 23:51:10.712542 kubelet[3095]: E0413 23:51:10.707090 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:51:12.026470 kubelet[3095]: I0413 23:51:12.024597 3095 scope.go:117] "RemoveContainer" containerID="ea12890a6750bf38451a450ea2d9b900ecae8536bf34d2451178d79ac47b26ff"
Apr 13 23:51:12.106487 kubelet[3095]: E0413 23:51:12.028540 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:51:12.170831 kubelet[3095]: I0413 23:51:12.161703 3095 scope.go:117] "RemoveContainer" containerID="cd83f73202aa9f1cb3056dbadd9881780d246ac6db583bab6175de30c359a3a5"
Apr 13 23:51:12.306456 kubelet[3095]: E0413 23:51:12.293849 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.042s"
Apr 13 23:51:14.182538 kubelet[3095]: E0413 23:51:14.177888 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.881s"
Apr 13 23:51:14.238525 containerd[1599]: time="2026-04-13T23:51:14.112392750Z" level=info msg="RemoveContainer for \"cd83f73202aa9f1cb3056dbadd9881780d246ac6db583bab6175de30c359a3a5\""
Apr 13 23:51:15.381450 containerd[1599]: time="2026-04-13T23:51:15.358074676Z" level=info msg="CreateContainer within sandbox \"fd39e6ba049f25172b871051ff662688823f734c044d1ef9098be63aa4680f60\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}"
Apr 13 23:51:15.509192 kubelet[3095]: I0413 23:51:15.507618 3095 scope.go:117] "RemoveContainer" containerID="4a7aadd8c427d0b70216495f2fdcb52522da5d0d4f51e4c07c8cb1fd32202b34"
Apr 13 23:51:15.824917 kubelet[3095]: E0413 23:51:15.805957 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:51:16.248647 containerd[1599]: time="2026-04-13T23:51:16.237348862Z" level=info msg="RemoveContainer for \"cd83f73202aa9f1cb3056dbadd9881780d246ac6db583bab6175de30c359a3a5\" returns successfully"
Apr 13 23:51:16.487778 kubelet[3095]: E0413 23:51:16.484379 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:51:16.487778 kubelet[3095]: I0413 23:51:16.485082 3095 scope.go:117] "RemoveContainer" containerID="abd059c5222283a91d0160b240534b84d1d2035b3ef273a4e0803d5aab1d1a4a"
Apr 13 23:51:16.577720 kubelet[3095]: E0413 23:51:16.577478 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.396s"
Apr 13 23:51:18.407642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3581010630.mount: Deactivated successfully.
Apr 13 23:51:19.626695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount509869233.mount: Deactivated successfully.
Apr 13 23:51:20.617452 containerd[1599]: time="2026-04-13T23:51:20.491860162Z" level=info msg="CreateContainer within sandbox \"fd39e6ba049f25172b871051ff662688823f734c044d1ef9098be63aa4680f60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"cd790fb60deb6ee0f27a3fc48efb1cb6b122c622cb4278fb8b90f4a2eaa5d809\""
Apr 13 23:51:21.094497 containerd[1599]: time="2026-04-13T23:51:21.093456631Z" level=info msg="StartContainer for \"cd790fb60deb6ee0f27a3fc48efb1cb6b122c622cb4278fb8b90f4a2eaa5d809\""
Apr 13 23:51:21.583565 containerd[1599]: time="2026-04-13T23:51:21.494860713Z" level=info msg="RemoveContainer for \"abd059c5222283a91d0160b240534b84d1d2035b3ef273a4e0803d5aab1d1a4a\""
Apr 13 23:51:21.805800 kubelet[3095]: E0413 23:51:21.804425 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.793s"
Apr 13 23:51:21.959882 containerd[1599]: time="2026-04-13T23:51:21.919132951Z" level=info msg="CreateContainer within sandbox \"6eb4e023684f0aae0c0b695bdae7b9e1c4c88cd28d650a1756ae84c21f4b692c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:3,}"
Apr 13 23:51:22.943439 kubelet[3095]: E0413 23:51:22.923125 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:51:23.588460 kubelet[3095]: E0413 23:51:23.583034 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:51:23.800457 containerd[1599]: time="2026-04-13T23:51:23.799351568Z" level=info msg="RemoveContainer for \"abd059c5222283a91d0160b240534b84d1d2035b3ef273a4e0803d5aab1d1a4a\" returns successfully"
Apr 13 23:51:25.640339 containerd[1599]: time="2026-04-13T23:51:25.621035114Z" level=info msg="CreateContainer within sandbox \"6eb4e023684f0aae0c0b695bdae7b9e1c4c88cd28d650a1756ae84c21f4b692c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:3,} returns container id \"1b30f6be3acbb87b5c9223e0ebeff4ce56e1f98ad7bc65e46e3f221954e59698\""
Apr 13 23:51:25.691293 kubelet[3095]: E0413 23:51:25.690339 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.595s"
Apr 13 23:51:25.716131 kubelet[3095]: I0413 23:51:25.715780 3095 scope.go:117] "RemoveContainer" containerID="abd059c5222283a91d0160b240534b84d1d2035b3ef273a4e0803d5aab1d1a4a"
Apr 13 23:51:26.615412 containerd[1599]: time="2026-04-13T23:51:26.578961541Z" level=error msg="ContainerStatus for \"abd059c5222283a91d0160b240534b84d1d2035b3ef273a4e0803d5aab1d1a4a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"abd059c5222283a91d0160b240534b84d1d2035b3ef273a4e0803d5aab1d1a4a\": not found"
Apr 13 23:51:27.053798 kubelet[3095]: E0413 23:51:27.049972 3095 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container
\"abd059c5222283a91d0160b240534b84d1d2035b3ef273a4e0803d5aab1d1a4a\": not found" containerID="abd059c5222283a91d0160b240534b84d1d2035b3ef273a4e0803d5aab1d1a4a" Apr 13 23:51:27.135122 kubelet[3095]: I0413 23:51:27.054411 3095 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"abd059c5222283a91d0160b240534b84d1d2035b3ef273a4e0803d5aab1d1a4a"} err="failed to get container status \"abd059c5222283a91d0160b240534b84d1d2035b3ef273a4e0803d5aab1d1a4a\": rpc error: code = NotFound desc = an error occurred when try to find container \"abd059c5222283a91d0160b240534b84d1d2035b3ef273a4e0803d5aab1d1a4a\": not found" Apr 13 23:51:27.171370 containerd[1599]: time="2026-04-13T23:51:27.132914583Z" level=info msg="StartContainer for \"1b30f6be3acbb87b5c9223e0ebeff4ce56e1f98ad7bc65e46e3f221954e59698\"" Apr 13 23:51:30.233618 kubelet[3095]: E0413 23:51:30.230720 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:51:35.289519 kubelet[3095]: E0413 23:51:35.275884 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.45s" Apr 13 23:51:40.606703 kubelet[3095]: E0413 23:51:40.542182 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:51:46.772333 kubelet[3095]: E0413 23:51:46.759784 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.421s" Apr 13 23:51:47.313300 containerd[1599]: time="2026-04-13T23:51:46.806703162Z" level=info msg="StartContainer for \"cd790fb60deb6ee0f27a3fc48efb1cb6b122c622cb4278fb8b90f4a2eaa5d809\" returns successfully" Apr 13 23:51:51.046933 containerd[1599]: time="2026-04-13T23:51:51.042891235Z" 
level=info msg="StartContainer for \"1b30f6be3acbb87b5c9223e0ebeff4ce56e1f98ad7bc65e46e3f221954e59698\" returns successfully" Apr 13 23:51:55.077844 kubelet[3095]: E0413 23:51:54.487671 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:52:03.289479 kubelet[3095]: E0413 23:52:02.956061 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:52:16.608331 kubelet[3095]: E0413 23:52:16.607224 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:52:17.337604 kubelet[3095]: E0413 23:52:17.332776 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="30.564s" Apr 13 23:52:18.135453 kubelet[3095]: E0413 23:52:18.133822 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:52:22.203701 kubelet[3095]: E0413 23:52:22.186862 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:52:24.374421 kubelet[3095]: E0413 23:52:24.340415 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:52:24.427016 kubelet[3095]: E0413 23:52:24.425702 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.028s" Apr 13 23:52:29.303098 
kubelet[3095]: E0413 23:52:29.300278 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:52:36.340892 kubelet[3095]: E0413 23:52:36.330588 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:52:38.128405 kubelet[3095]: E0413 23:52:38.126821 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.688s" Apr 13 23:52:39.480990 kubelet[3095]: E0413 23:52:39.225840 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:52:44.117972 kubelet[3095]: E0413 23:52:44.116521 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:52:44.427382 kubelet[3095]: E0413 23:52:44.399964 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:52:50.364450 kubelet[3095]: E0413 23:52:50.353609 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:52:51.943633 kubelet[3095]: E0413 23:52:51.717758 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:52:53.000047 kubelet[3095]: E0413 23:52:52.998740 3095 kubelet.go:2627] "Housekeeping took longer than expected" 
err="housekeeping took too long" expected="1s" actual="14.842s" Apr 13 23:52:54.618612 kubelet[3095]: E0413 23:52:54.611598 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:52:56.423217 sudo[1827]: pam_unix(sudo:session): session closed for user root Apr 13 23:52:56.505609 sshd[1821]: pam_unix(sshd:session): session closed for user core Apr 13 23:52:57.450160 systemd[1]: sshd@8-10.0.0.10:22-10.0.0.1:34434.service: Deactivated successfully. Apr 13 23:52:57.591476 systemd[1]: session-9.scope: Deactivated successfully. Apr 13 23:52:57.929522 systemd-logind[1573]: Session 9 logged out. Waiting for processes to exit. Apr 13 23:52:58.132687 systemd-logind[1573]: Removed session 9. Apr 13 23:52:58.642371 kubelet[3095]: E0413 23:52:58.639652 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:53:00.798424 kubelet[3095]: E0413 23:53:00.795583 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.729s" Apr 13 23:53:01.408621 kubelet[3095]: E0413 23:53:01.407569 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:53:05.312410 kubelet[3095]: E0413 23:53:05.308358 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:53:07.824680 kubelet[3095]: E0413 23:53:07.815694 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.654s" Apr 13 23:53:14.118374 kubelet[3095]: E0413 23:53:14.113956 3095 
kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:53:22.551402 kubelet[3095]: E0413 23:53:22.548560 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:53:24.681448 kubelet[3095]: E0413 23:53:24.678519 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.545s" Apr 13 23:53:29.878014 kubelet[3095]: E0413 23:53:29.757767 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:53:36.418535 kubelet[3095]: E0413 23:53:36.410980 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.692s" Apr 13 23:53:37.098944 kubelet[3095]: E0413 23:53:37.097579 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:53:38.236544 kubelet[3095]: E0413 23:53:38.214522 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:53:40.168667 kubelet[3095]: E0413 23:53:39.976192 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.458s" Apr 13 23:53:49.360621 kubelet[3095]: E0413 23:53:49.210084 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:53:51.715146 
kubelet[3095]: E0413 23:53:51.644566 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:53:57.396969 kubelet[3095]: E0413 23:53:57.382993 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:54:06.082451 kubelet[3095]: E0413 23:54:05.634054 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:54:08.663493 kubelet[3095]: E0413 23:54:08.471605 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="24.307s" Apr 13 23:54:11.704363 kubelet[3095]: E0413 23:54:11.699947 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:54:12.498329 kubelet[3095]: E0413 23:54:12.497171 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:54:13.285402 kubelet[3095]: E0413 23:54:13.284276 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:54:23.839003 kubelet[3095]: E0413 23:54:23.665346 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:54:24.665473 kubelet[3095]: E0413 23:54:24.662479 3095 kubelet.go:2627] "Housekeeping took longer than expected" 
err="housekeeping took too long" expected="1s" actual="15.173s" Apr 13 23:54:29.890478 containerd[1599]: time="2026-04-13T23:54:29.828963979Z" level=error msg="failed to handle container TaskExit event container_id:\"1b30f6be3acbb87b5c9223e0ebeff4ce56e1f98ad7bc65e46e3f221954e59698\" id:\"1b30f6be3acbb87b5c9223e0ebeff4ce56e1f98ad7bc65e46e3f221954e59698\" pid:3444 exit_status:1 exited_at:{seconds:1776124458 nanos:904623991}" error="failed to stop container: context deadline exceeded: unknown" Apr 13 23:54:30.109045 containerd[1599]: time="2026-04-13T23:54:30.036840996Z" level=error msg="ttrpc: received message on inactive stream" stream=21 Apr 13 23:54:30.125087 containerd[1599]: time="2026-04-13T23:54:30.121775214Z" level=error msg="ttrpc: received message on inactive stream" stream=25 Apr 13 23:54:31.542614 containerd[1599]: time="2026-04-13T23:54:31.537818356Z" level=info msg="TaskExit event container_id:\"1b30f6be3acbb87b5c9223e0ebeff4ce56e1f98ad7bc65e46e3f221954e59698\" id:\"1b30f6be3acbb87b5c9223e0ebeff4ce56e1f98ad7bc65e46e3f221954e59698\" pid:3444 exit_status:1 exited_at:{seconds:1776124458 nanos:904623991}" Apr 13 23:54:32.288278 kubelet[3095]: E0413 23:54:32.286980 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:54:32.911670 containerd[1599]: time="2026-04-13T23:54:32.642047203Z" level=error msg="ttrpc: received message on inactive stream" stream=29 Apr 13 23:54:33.730779 kubelet[3095]: E0413 23:54:33.707644 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.023s" Apr 13 23:54:33.775299 containerd[1599]: time="2026-04-13T23:54:32.608028822Z" level=error msg="get state for cd790fb60deb6ee0f27a3fc48efb1cb6b122c622cb4278fb8b90f4a2eaa5d809" error="context deadline exceeded: unknown" Apr 13 23:54:33.892654 containerd[1599]: 
time="2026-04-13T23:54:33.782849046Z" level=warning msg="unknown status" status=0 Apr 13 23:54:35.726032 containerd[1599]: time="2026-04-13T23:54:35.721473871Z" level=error msg="failed to handle container TaskExit event container_id:\"cd790fb60deb6ee0f27a3fc48efb1cb6b122c622cb4278fb8b90f4a2eaa5d809\" id:\"cd790fb60deb6ee0f27a3fc48efb1cb6b122c622cb4278fb8b90f4a2eaa5d809\" pid:3416 exit_status:1 exited_at:{seconds:1776124461 nanos:371515650}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 13 23:54:35.941722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd790fb60deb6ee0f27a3fc48efb1cb6b122c622cb4278fb8b90f4a2eaa5d809-rootfs.mount: Deactivated successfully. Apr 13 23:54:36.098168 containerd[1599]: time="2026-04-13T23:54:35.771839838Z" level=error msg="ttrpc: received message on inactive stream" stream=31 Apr 13 23:54:36.932572 kubelet[3095]: E0413 23:54:36.916736 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:54:38.608553 kubelet[3095]: E0413 23:54:38.603508 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:54:42.197562 containerd[1599]: time="2026-04-13T23:54:42.191604967Z" level=error msg="Failed to handle backOff event container_id:\"1b30f6be3acbb87b5c9223e0ebeff4ce56e1f98ad7bc65e46e3f221954e59698\" id:\"1b30f6be3acbb87b5c9223e0ebeff4ce56e1f98ad7bc65e46e3f221954e59698\" pid:3444 exit_status:1 exited_at:{seconds:1776124458 nanos:904623991} for 1b30f6be3acbb87b5c9223e0ebeff4ce56e1f98ad7bc65e46e3f221954e59698" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 13 23:54:42.226889 containerd[1599]: time="2026-04-13T23:54:42.223605310Z" 
level=info msg="TaskExit event container_id:\"cd790fb60deb6ee0f27a3fc48efb1cb6b122c622cb4278fb8b90f4a2eaa5d809\" id:\"cd790fb60deb6ee0f27a3fc48efb1cb6b122c622cb4278fb8b90f4a2eaa5d809\" pid:3416 exit_status:1 exited_at:{seconds:1776124461 nanos:371515650}" Apr 13 23:54:43.073728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b30f6be3acbb87b5c9223e0ebeff4ce56e1f98ad7bc65e46e3f221954e59698-rootfs.mount: Deactivated successfully. Apr 13 23:54:43.696727 containerd[1599]: time="2026-04-13T23:54:43.690678822Z" level=error msg="ttrpc: received message on inactive stream" stream=39 Apr 13 23:54:45.423085 kubelet[3095]: E0413 23:54:45.395647 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:54:47.701894 kubelet[3095]: E0413 23:54:47.699463 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.025s" Apr 13 23:54:49.297645 kubelet[3095]: E0413 23:54:49.288418 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:54:51.593783 kubelet[3095]: E0413 23:54:51.588393 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:54:51.716008 kubelet[3095]: E0413 23:54:51.707018 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.867s" Apr 13 23:54:53.011470 containerd[1599]: time="2026-04-13T23:54:52.973728061Z" level=error msg="ttrpc: received message on inactive stream" stream=45 Apr 13 23:54:53.207731 containerd[1599]: time="2026-04-13T23:54:53.173687749Z" level=error msg="Failed to handle backOff event 
container_id:\"cd790fb60deb6ee0f27a3fc48efb1cb6b122c622cb4278fb8b90f4a2eaa5d809\" id:\"cd790fb60deb6ee0f27a3fc48efb1cb6b122c622cb4278fb8b90f4a2eaa5d809\" pid:3416 exit_status:1 exited_at:{seconds:1776124461 nanos:371515650} for cd790fb60deb6ee0f27a3fc48efb1cb6b122c622cb4278fb8b90f4a2eaa5d809" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 13 23:54:53.726453 containerd[1599]: time="2026-04-13T23:54:53.719710301Z" level=info msg="TaskExit event container_id:\"1b30f6be3acbb87b5c9223e0ebeff4ce56e1f98ad7bc65e46e3f221954e59698\" id:\"1b30f6be3acbb87b5c9223e0ebeff4ce56e1f98ad7bc65e46e3f221954e59698\" pid:3444 exit_status:1 exited_at:{seconds:1776124458 nanos:904623991}" Apr 13 23:54:54.405395 kubelet[3095]: E0413 23:54:54.403580 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.659s" Apr 13 23:54:56.336397 kubelet[3095]: E0413 23:54:56.331243 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.345s" Apr 13 23:54:57.104649 kubelet[3095]: E0413 23:54:57.095155 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:54:57.503977 containerd[1599]: time="2026-04-13T23:54:57.487969844Z" level=info msg="shim disconnected" id=1b30f6be3acbb87b5c9223e0ebeff4ce56e1f98ad7bc65e46e3f221954e59698 namespace=k8s.io Apr 13 23:54:57.537493 containerd[1599]: time="2026-04-13T23:54:57.530884343Z" level=warning msg="cleaning up after shim disconnected" id=1b30f6be3acbb87b5c9223e0ebeff4ce56e1f98ad7bc65e46e3f221954e59698 namespace=k8s.io Apr 13 23:54:57.609050 containerd[1599]: time="2026-04-13T23:54:57.601333156Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 23:54:57.936181 kubelet[3095]: E0413 23:54:57.929069 
3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.094s" Apr 13 23:55:00.518631 containerd[1599]: time="2026-04-13T23:55:00.517885291Z" level=info msg="TaskExit event container_id:\"cd790fb60deb6ee0f27a3fc48efb1cb6b122c622cb4278fb8b90f4a2eaa5d809\" id:\"cd790fb60deb6ee0f27a3fc48efb1cb6b122c622cb4278fb8b90f4a2eaa5d809\" pid:3416 exit_status:1 exited_at:{seconds:1776124461 nanos:371515650}" Apr 13 23:55:01.166324 kubelet[3095]: E0413 23:55:01.165787 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.281s" Apr 13 23:55:03.283464 kubelet[3095]: E0413 23:55:03.279647 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.112s" Apr 13 23:55:03.311262 kubelet[3095]: E0413 23:55:03.302947 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:55:05.177252 containerd[1599]: time="2026-04-13T23:55:05.175546602Z" level=info msg="shim disconnected" id=cd790fb60deb6ee0f27a3fc48efb1cb6b122c622cb4278fb8b90f4a2eaa5d809 namespace=k8s.io Apr 13 23:55:05.191675 containerd[1599]: time="2026-04-13T23:55:05.185997556Z" level=warning msg="cleaning up after shim disconnected" id=cd790fb60deb6ee0f27a3fc48efb1cb6b122c622cb4278fb8b90f4a2eaa5d809 namespace=k8s.io Apr 13 23:55:05.198483 containerd[1599]: time="2026-04-13T23:55:05.195845956Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 23:55:05.486845 kubelet[3095]: E0413 23:55:05.482570 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.138s" Apr 13 23:55:06.540202 kubelet[3095]: E0413 23:55:06.529775 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" 
actual="1.005s" Apr 13 23:55:06.667602 kubelet[3095]: I0413 23:55:06.660046 3095 scope.go:117] "RemoveContainer" containerID="4a7aadd8c427d0b70216495f2fdcb52522da5d0d4f51e4c07c8cb1fd32202b34" Apr 13 23:55:07.044540 kubelet[3095]: I0413 23:55:07.043720 3095 scope.go:117] "RemoveContainer" containerID="1b30f6be3acbb87b5c9223e0ebeff4ce56e1f98ad7bc65e46e3f221954e59698" Apr 13 23:55:07.233900 kubelet[3095]: E0413 23:55:07.229325 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:55:07.987084 containerd[1599]: time="2026-04-13T23:55:07.982379232Z" level=info msg="RemoveContainer for \"4a7aadd8c427d0b70216495f2fdcb52522da5d0d4f51e4c07c8cb1fd32202b34\"" Apr 13 23:55:08.459596 containerd[1599]: time="2026-04-13T23:55:08.456529363Z" level=warning msg="cleanup warnings time=\"2026-04-13T23:55:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 23:55:09.689472 containerd[1599]: time="2026-04-13T23:55:09.631905092Z" level=info msg="CreateContainer within sandbox \"6eb4e023684f0aae0c0b695bdae7b9e1c4c88cd28d650a1756ae84c21f4b692c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:4,}" Apr 13 23:55:09.720502 kubelet[3095]: E0413 23:55:09.641706 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:55:09.895599 containerd[1599]: time="2026-04-13T23:55:09.708964257Z" level=info msg="RemoveContainer for \"4a7aadd8c427d0b70216495f2fdcb52522da5d0d4f51e4c07c8cb1fd32202b34\" returns successfully" Apr 13 23:55:12.226486 kubelet[3095]: E0413 23:55:12.213712 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" 
actual="3.197s" Apr 13 23:55:12.521098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3322475968.mount: Deactivated successfully. Apr 13 23:55:13.182680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1600425854.mount: Deactivated successfully. Apr 13 23:55:13.401541 containerd[1599]: time="2026-04-13T23:55:13.398395989Z" level=info msg="CreateContainer within sandbox \"6eb4e023684f0aae0c0b695bdae7b9e1c4c88cd28d650a1756ae84c21f4b692c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:4,} returns container id \"ead280afd5952b42b400cab3753c0f0eec70c996651309c719b7c04417445544\"" Apr 13 23:55:13.881794 containerd[1599]: time="2026-04-13T23:55:13.881039864Z" level=info msg="StartContainer for \"ead280afd5952b42b400cab3753c0f0eec70c996651309c719b7c04417445544\"" Apr 13 23:55:14.225350 kubelet[3095]: E0413 23:55:14.217173 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.938s" Apr 13 23:55:15.297514 kubelet[3095]: E0413 23:55:15.290657 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:55:16.194522 kubelet[3095]: I0413 23:55:16.193998 3095 scope.go:117] "RemoveContainer" containerID="ea12890a6750bf38451a450ea2d9b900ecae8536bf34d2451178d79ac47b26ff" Apr 13 23:55:16.368409 kubelet[3095]: I0413 23:55:16.365627 3095 scope.go:117] "RemoveContainer" containerID="cd790fb60deb6ee0f27a3fc48efb1cb6b122c622cb4278fb8b90f4a2eaa5d809" Apr 13 23:55:16.516491 kubelet[3095]: E0413 23:55:16.489077 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:55:18.174459 containerd[1599]: time="2026-04-13T23:55:18.164522294Z" level=info msg="RemoveContainer for 
\"ea12890a6750bf38451a450ea2d9b900ecae8536bf34d2451178d79ac47b26ff\"" Apr 13 23:55:19.728527 kubelet[3095]: E0413 23:55:19.714620 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.878s" Apr 13 23:55:19.800027 containerd[1599]: time="2026-04-13T23:55:19.757007567Z" level=info msg="RemoveContainer for \"ea12890a6750bf38451a450ea2d9b900ecae8536bf34d2451178d79ac47b26ff\" returns successfully" Apr 13 23:55:20.804544 kubelet[3095]: E0413 23:55:20.800627 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:55:21.212545 containerd[1599]: time="2026-04-13T23:55:21.175947732Z" level=info msg="CreateContainer within sandbox \"fd39e6ba049f25172b871051ff662688823f734c044d1ef9098be63aa4680f60\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:3,}" Apr 13 23:55:22.207574 kubelet[3095]: E0413 23:55:22.204236 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.399s" Apr 13 23:55:24.641603 kubelet[3095]: E0413 23:55:24.591620 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.432s" Apr 13 23:55:25.328626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4217832252.mount: Deactivated successfully. Apr 13 23:55:27.256784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2836416173.mount: Deactivated successfully. 
Apr 13 23:55:27.782550 kubelet[3095]: E0413 23:55:27.729982 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:55:28.387342 containerd[1599]: time="2026-04-13T23:55:28.386895910Z" level=info msg="CreateContainer within sandbox \"fd39e6ba049f25172b871051ff662688823f734c044d1ef9098be63aa4680f60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:3,} returns container id \"ab4b43671617960b8a726a50ad9306f348286309129ba143912f52a50cf4b9ca\""
Apr 13 23:55:28.717044 kubelet[3095]: E0413 23:55:28.694262 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.889s"
Apr 13 23:55:29.035940 containerd[1599]: time="2026-04-13T23:55:29.031327861Z" level=info msg="StartContainer for \"ab4b43671617960b8a726a50ad9306f348286309129ba143912f52a50cf4b9ca\""
Apr 13 23:55:29.817325 kubelet[3095]: E0413 23:55:29.815095 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.027s"
Apr 13 23:55:32.826034 kubelet[3095]: E0413 23:55:32.810638 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.881s"
Apr 13 23:55:34.761644 kubelet[3095]: E0413 23:55:34.714421 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:55:36.123632 kubelet[3095]: E0413 23:55:36.111710 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.282s"
Apr 13 23:55:39.253445 kubelet[3095]: E0413 23:55:39.243572 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.05s"
Apr 13 23:55:39.937519 kubelet[3095]: E0413 23:55:39.930915 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:55:41.583462 kubelet[3095]: E0413 23:55:41.580905 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:55:41.715921 kubelet[3095]: E0413 23:55:41.693204 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.426s"
Apr 13 23:55:43.877613 containerd[1599]: time="2026-04-13T23:55:43.816614881Z" level=info msg="StartContainer for \"ead280afd5952b42b400cab3753c0f0eec70c996651309c719b7c04417445544\" returns successfully"
Apr 13 23:55:45.411605 kubelet[3095]: E0413 23:55:45.396800 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.675s"
Apr 13 23:55:55.942485 kubelet[3095]: E0413 23:55:55.479802 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:55:58.237541 kubelet[3095]: I0413 23:55:58.034365 3095 scope.go:117] "RemoveContainer" containerID="cd790fb60deb6ee0f27a3fc48efb1cb6b122c622cb4278fb8b90f4a2eaa5d809"
Apr 13 23:56:04.544571 kubelet[3095]: E0413 23:56:04.390638 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="18.652s"
Apr 13 23:56:05.057564 kubelet[3095]: E0413 23:56:05.054057 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:56:05.838213 containerd[1599]: time="2026-04-13T23:56:05.837366164Z" level=info msg="RemoveContainer for \"cd790fb60deb6ee0f27a3fc48efb1cb6b122c622cb4278fb8b90f4a2eaa5d809\""
Apr 13 23:56:06.275590 kubelet[3095]: E0413 23:56:06.229632 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.439s"
Apr 13 23:56:06.284736 containerd[1599]: time="2026-04-13T23:56:06.237796396Z" level=info msg="RemoveContainer for \"cd790fb60deb6ee0f27a3fc48efb1cb6b122c622cb4278fb8b90f4a2eaa5d809\" returns successfully"
Apr 13 23:56:06.469582 kubelet[3095]: E0413 23:56:06.468362 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:56:08.190032 containerd[1599]: time="2026-04-13T23:56:08.188361169Z" level=info msg="StartContainer for \"ab4b43671617960b8a726a50ad9306f348286309129ba143912f52a50cf4b9ca\" returns successfully"
Apr 13 23:56:12.700525 kubelet[3095]: E0413 23:56:12.671992 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:56:16.799455 kubelet[3095]: E0413 23:56:16.796673 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.556s"
Apr 13 23:56:17.940540 kubelet[3095]: E0413 23:56:17.934818 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:56:19.193838 kubelet[3095]: E0413 23:56:19.192021 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:56:21.299867 kubelet[3095]: E0413 23:56:21.278502 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:56:26.494596 kubelet[3095]: E0413 23:56:26.442360 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:56:32.477594 kubelet[3095]: E0413 23:56:31.319009 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:56:35.231633 kubelet[3095]: E0413 23:56:35.209553 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="16.921s"
Apr 13 23:56:45.180558 kubelet[3095]: E0413 23:56:45.142621 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:56:53.401427 kubelet[3095]: E0413 23:56:53.339598 3095 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 13 23:56:54.453216 kubelet[3095]: E0413 23:56:54.286834 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:57:02.928051 kubelet[3095]: E0413 23:57:02.449587 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:57:06.897697 kubelet[3095]: E0413 23:57:06.893560 3095 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 13 23:57:07.036350 kubelet[3095]: E0413 23:57:07.030834 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="31.676s"
Apr 13 23:57:07.951816 kubelet[3095]: E0413 23:57:07.525003 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:08.393280 kubelet[3095]: E0413 23:57:08.391698 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:10.819379 kubelet[3095]: E0413 23:57:10.816247 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:57:13.227694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ead280afd5952b42b400cab3753c0f0eec70c996651309c719b7c04417445544-rootfs.mount: Deactivated successfully.
Apr 13 23:57:13.329615 containerd[1599]: time="2026-04-13T23:57:13.312046074Z" level=info msg="shim disconnected" id=ead280afd5952b42b400cab3753c0f0eec70c996651309c719b7c04417445544 namespace=k8s.io
Apr 13 23:57:13.384536 kubelet[3095]: E0413 23:57:13.343003 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.221s"
Apr 13 23:57:13.406512 containerd[1599]: time="2026-04-13T23:57:13.398649434Z" level=warning msg="cleaning up after shim disconnected" id=ead280afd5952b42b400cab3753c0f0eec70c996651309c719b7c04417445544 namespace=k8s.io
Apr 13 23:57:13.423465 containerd[1599]: time="2026-04-13T23:57:13.402946958Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 23:57:13.763583 kubelet[3095]: E0413 23:57:13.762707 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:15.188493 containerd[1599]: time="2026-04-13T23:57:15.182686533Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=ead280afd5952b42b400cab3753c0f0eec70c996651309c719b7c04417445544
Apr 13 23:57:15.231454 kubelet[3095]: E0413 23:57:15.214304 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:16.880309 kubelet[3095]: E0413 23:57:16.812933 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:57:16.957507 containerd[1599]: time="2026-04-13T23:57:16.911689976Z" level=warning msg="cleanup warnings time=\"2026-04-13T23:57:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 1: open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ead280afd5952b42b400cab3753c0f0eec70c996651309c719b7c04417445544/log.json: no such file or directory\\n\\nNAME:\\n runc - Open Container Initiative runtime\\n\\nrunc is a command line client for running applications packaged according to\\nthe Open Container Initiative (OCI) format and is a compliant implementation of the\\nOpen Container Initiative specification.\\n\\nrunc integrates well with existing process supervisors to provide a production\\ncontainer runtime environment for applications. It can be used with your\\nexisting process monitoring tools and the container will be spawned as a\\ndirect child of the process supervisor.\\n\\nContainers are configured using bundles. A bundle for a container is a directory\\nthat includes a specification file named \\\"config.json\\\" and a root filesystem.\\nThe root filesystem contains the contents of the container.\\n\\nTo start a new instance of a container:\\n\\n # runc run [ -b bundle ] \\n\\nWhere \\\"\\\" is your name for the instance of the container that you\\nare starting. The name you provide for the container instance must be unique on\\nyour host. Providing the bundle directory using \\\"-b\\\" is optional. The default\\nvalue for \\\"bundle\\\" is the current directory.\\n\\nUSAGE:\\n runc [global options] command [command options] [arguments...]\\n\\nVERSION:\\n 1.1.13\\ncommit: 58aa9203c123022138b22cf96540c284876a7910\\nspec: 1.0.2-dev\\ngo: go1.21.13\\nlibseccomp: 2.5.5\\n\\nCOMMANDS:\\n checkpoint checkpoint a running container\\n create create a container\\n delete delete any resources held by the container often used with detached container\\n events display container events such as OOM notifications, cpu, memory, and IO usage statistics\\n exec execute new process inside the container\\n kill kill sends the specified signal (default: SIGTERM) to the container's init process\\n list lists containers started by runc with the given root\\n pause pause suspends all processes inside the container\\n ps ps displays the processes running inside a container\\n restore restore a container from a previous checkpoint\\n resume resumes all processes that have been previously paused\\n run create and run a container\\n spec create a new specification file\\n start executes the user defined process in a created container\\n state output the state of a container\\n update update container resource constraints\\n features show the enabled features\\n help, h Shows a list of commands or help for one command\\n\\nGLOBAL OPTIONS:\\n --debug enable debug logging\\n --log value set the log file to write runc logs to (default is '/dev/stderr')\\n --log-format value set the log format ('text' (default), or 'json') (default: \\\"text\\\")\\n --root value root directory for storage of container state (this should be located in tmpfs) (default: \\\"/run/runc\\\")\\n --criu value path to the criu binary used for checkpoint and restore (default: \\\"criu\\\")\\n --systemd-cgroup enable systemd cgroup support, expects cgroupsPath to be of form \\\"slice:prefix:name\\\" for e.g. \\\"system.slice:runc:434234\\\"\\n --rootless value ignore cgroup permission errors ('true', 'false', or 'auto') (default: \\\"auto\\\")\\n --help, -h show help\\n --version, -v print the version\\n{\\\"level\\\":\\\"error\\\",\\\"msg\\\":\\\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ead280afd5952b42b400cab3753c0f0eec70c996651309c719b7c04417445544/log.json: no such file or directory\\\",\\\"time\\\":\\\"2026-04-13T23:57:16Z\\\"}\\n\" runtime=io.containerd.runc.v2\ntime=\"2026-04-13T23:57:16Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ead280afd5952b42b400cab3753c0f0eec70c996651309c719b7c04417445544/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 13 23:57:17.259711 kubelet[3095]: E0413 23:57:17.229956 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.305s"
Apr 13 23:57:17.858202 kubelet[3095]: I0413 23:57:17.857931 3095 scope.go:117] "RemoveContainer" containerID="1b30f6be3acbb87b5c9223e0ebeff4ce56e1f98ad7bc65e46e3f221954e59698"
Apr 13 23:57:17.862576 kubelet[3095]: I0413 23:57:17.862446 3095 scope.go:117] "RemoveContainer" containerID="ead280afd5952b42b400cab3753c0f0eec70c996651309c719b7c04417445544"
Apr 13 23:57:17.869266 kubelet[3095]: E0413 23:57:17.866972 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:17.880762 kubelet[3095]: E0413 23:57:17.875817 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(ebf8e820819e4b80bc03d078b9ba80f5)\"" pod="kube-system/kube-controller-manager-localhost" podUID="ebf8e820819e4b80bc03d078b9ba80f5"
Apr 13 23:57:18.231915 containerd[1599]: time="2026-04-13T23:57:18.219665794Z" level=info msg="RemoveContainer for \"1b30f6be3acbb87b5c9223e0ebeff4ce56e1f98ad7bc65e46e3f221954e59698\""
Apr 13 23:57:18.690690 containerd[1599]: time="2026-04-13T23:57:18.686506987Z" level=info msg="RemoveContainer for \"1b30f6be3acbb87b5c9223e0ebeff4ce56e1f98ad7bc65e46e3f221954e59698\" returns successfully"
Apr 13 23:57:19.839742 kubelet[3095]: E0413 23:57:19.820249 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.014s"
Apr 13 23:57:21.548389 kubelet[3095]: I0413 23:57:21.536597 3095 scope.go:117] "RemoveContainer" containerID="ead280afd5952b42b400cab3753c0f0eec70c996651309c719b7c04417445544"
Apr 13 23:57:21.818624 kubelet[3095]: E0413 23:57:21.793998 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:22.380921 kubelet[3095]: E0413 23:57:22.380307 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(ebf8e820819e4b80bc03d078b9ba80f5)\"" pod="kube-system/kube-controller-manager-localhost" podUID="ebf8e820819e4b80bc03d078b9ba80f5"
Apr 13 23:57:22.611559 kubelet[3095]: E0413 23:57:22.607573 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:57:23.510644 kubelet[3095]: E0413 23:57:23.499650 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.596s"
Apr 13 23:57:27.838706 kubelet[3095]: E0413 23:57:27.832612 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:57:33.938504 kubelet[3095]: I0413 23:57:33.936078 3095 scope.go:117] "RemoveContainer" containerID="ead280afd5952b42b400cab3753c0f0eec70c996651309c719b7c04417445544"
Apr 13 23:57:34.405561 kubelet[3095]: E0413 23:57:34.384523 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:57:34.455712 kubelet[3095]: E0413 23:57:34.445677 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:35.289427 kubelet[3095]: E0413 23:57:35.287871 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(ebf8e820819e4b80bc03d078b9ba80f5)\"" pod="kube-system/kube-controller-manager-localhost" podUID="ebf8e820819e4b80bc03d078b9ba80f5"
Apr 13 23:57:36.932993 kubelet[3095]: E0413 23:57:36.931560 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.907s"
Apr 13 23:57:39.076986 kubelet[3095]: E0413 23:57:39.076370 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.14s"
Apr 13 23:57:40.689041 kubelet[3095]: E0413 23:57:40.687539 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:57:40.894591 kubelet[3095]: E0413 23:57:40.883516 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.781s"
Apr 13 23:57:45.397037 kubelet[3095]: E0413 23:57:45.358476 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.307s"
Apr 13 23:57:47.349705 kubelet[3095]: E0413 23:57:47.345448 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:57:49.915581 kubelet[3095]: E0413 23:57:49.844594 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.343s"
Apr 13 23:57:50.913732 kubelet[3095]: I0413 23:57:50.905872 3095 scope.go:117] "RemoveContainer" containerID="ead280afd5952b42b400cab3753c0f0eec70c996651309c719b7c04417445544"
Apr 13 23:57:51.201623 kubelet[3095]: E0413 23:57:51.193237 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:57:52.713582 kubelet[3095]: E0413 23:57:52.706468 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.79s"
Apr 13 23:57:53.392499 kubelet[3095]: E0413 23:57:53.388488 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:57:53.831475 containerd[1599]: time="2026-04-13T23:57:53.826613353Z" level=info msg="CreateContainer within sandbox \"6eb4e023684f0aae0c0b695bdae7b9e1c4c88cd28d650a1756ae84c21f4b692c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:5,}"
Apr 13 23:57:58.596481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1970887511.mount: Deactivated successfully.
Apr 13 23:57:59.229617 containerd[1599]: time="2026-04-13T23:57:59.218680262Z" level=info msg="CreateContainer within sandbox \"6eb4e023684f0aae0c0b695bdae7b9e1c4c88cd28d650a1756ae84c21f4b692c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:5,} returns container id \"895c625d9e84f5b57cd16dfec78e3aa5dd595c69ba49dc11482002c154dd2158\""
Apr 13 23:57:59.418648 kubelet[3095]: E0413 23:57:59.411789 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:57:59.443369 kubelet[3095]: E0413 23:57:59.440592 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.3s"
Apr 13 23:58:00.508566 containerd[1599]: time="2026-04-13T23:58:00.507678659Z" level=info msg="StartContainer for \"895c625d9e84f5b57cd16dfec78e3aa5dd595c69ba49dc11482002c154dd2158\""
Apr 13 23:58:03.233402 kubelet[3095]: E0413 23:58:03.220517 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.709s"
Apr 13 23:58:06.530399 kubelet[3095]: E0413 23:58:06.527239 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:58:11.137548 kubelet[3095]: E0413 23:58:11.134728 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.812s"
Apr 13 23:58:14.376025 kubelet[3095]: E0413 23:58:14.372720 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:58:15.475096 kubelet[3095]: I0413 23:58:15.473667 3095 scope.go:117] "RemoveContainer" containerID="ead280afd5952b42b400cab3753c0f0eec70c996651309c719b7c04417445544"
Apr 13 23:58:18.544308 containerd[1599]: time="2026-04-13T23:58:18.512475372Z" level=info msg="RemoveContainer for \"ead280afd5952b42b400cab3753c0f0eec70c996651309c719b7c04417445544\""
Apr 13 23:58:19.589533 kubelet[3095]: E0413 23:58:19.588269 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.428s"
Apr 13 23:58:19.802656 containerd[1599]: time="2026-04-13T23:58:19.778703381Z" level=info msg="RemoveContainer for \"ead280afd5952b42b400cab3753c0f0eec70c996651309c719b7c04417445544\" returns successfully"
Apr 13 23:58:21.396244 kubelet[3095]: E0413 23:58:21.393858 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:58:23.560500 containerd[1599]: time="2026-04-13T23:58:23.552336316Z" level=info msg="StartContainer for \"895c625d9e84f5b57cd16dfec78e3aa5dd595c69ba49dc11482002c154dd2158\" returns successfully"
Apr 13 23:58:25.689515 kubelet[3095]: E0413 23:58:25.682235 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.091s"
Apr 13 23:58:27.177714 kubelet[3095]: E0413 23:58:27.170513 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:58:28.574064 kubelet[3095]: E0413 23:58:28.486338 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.491s"
Apr 13 23:58:32.784420 kubelet[3095]: E0413 23:58:32.779364 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:36.310011 kubelet[3095]: E0413 23:58:36.305696 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:58:40.487703 kubelet[3095]: E0413 23:58:40.409578 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.686s"
Apr 13 23:58:41.941016 kubelet[3095]: E0413 23:58:41.937843 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:58:42.099237 kubelet[3095]: E0413 23:58:42.004087 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:42.873200 kubelet[3095]: E0413 23:58:42.844381 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.937s"
Apr 13 23:58:43.388419 kubelet[3095]: E0413 23:58:43.387135 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:58:50.214626 kubelet[3095]: E0413 23:58:50.203577 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:58:56.216584 kubelet[3095]: E0413 23:58:56.202525 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.071s"
Apr 13 23:58:57.285932 kubelet[3095]: E0413 23:58:57.277422 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:59:04.916558 kubelet[3095]: E0413 23:59:04.906274 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:59:06.483482 kubelet[3095]: E0413 23:59:05.748075 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:59:16.205637 kubelet[3095]: E0413 23:59:16.192418 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="16.498s"
Apr 13 23:59:17.586426 kubelet[3095]: E0413 23:59:17.572290 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:59:19.894218 kubelet[3095]: E0413 23:59:19.891588 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:59:23.485519 kubelet[3095]: E0413 23:59:23.370585 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.652s"
Apr 13 23:59:25.809001 kubelet[3095]: E0413 23:59:25.616593 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:59:34.912232 kubelet[3095]: E0413 23:59:34.892051 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:59:43.324691 kubelet[3095]: E0413 23:59:42.536065 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:59:47.609073 kubelet[3095]: E0413 23:59:47.531023 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="23.987s"
Apr 13 23:59:48.609713 kubelet[3095]: E0413 23:59:48.582868 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:59:51.181392 kubelet[3095]: E0413 23:59:50.978874 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:59:54.455326 kubelet[3095]: E0413 23:59:54.423851 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.738s"
Apr 13 23:59:56.684137 kubelet[3095]: E0413 23:59:56.682701 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.171s"
Apr 13 23:59:56.941998 kubelet[3095]: E0413 23:59:56.930563 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:59:59.685093 kubelet[3095]: E0413 23:59:59.657732 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.918s"
Apr 14 00:00:03.816900 systemd[1]: Started logrotate.service - Rotate and Compress System Logs.
Apr 14 00:00:05.828452 systemd[1]: logrotate.service: Deactivated successfully.
Apr 14 00:00:06.052398 kubelet[3095]: W0414 00:00:06.050854 3095 watcher.go:93] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/logrotate.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/logrotate.service: no such file or directory
Apr 14 00:00:06.068038 kubelet[3095]: W0414 00:00:06.066902 3095 watcher.go:93] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/logrotate.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/logrotate.service: no such file or directory
Apr 14 00:00:07.797533 kubelet[3095]: W0414 00:00:07.605455 3095 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/system.slice/logrotate.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/logrotate.service: no such file or directory
Apr 14 00:00:07.839798 kubelet[3095]: W0414 00:00:07.802686 3095 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/system.slice/logrotate.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/logrotate.service: no such file or directory
Apr 14 00:00:07.839798 kubelet[3095]: W0414 00:00:07.804742 3095 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/system.slice/logrotate.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/logrotate.service: no such file or directory
Apr 14 00:00:10.242801 kubelet[3095]: E0414 00:00:09.969662 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:00:14.990314 kubelet[3095]: E0414 00:00:14.987021 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.161s"
Apr 14 00:00:15.991528 kubelet[3095]: E0414 00:00:15.979331 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:00:16.551471 kubelet[3095]: E0414 00:00:16.539791 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:00:16.608491 kubelet[3095]: E0414 00:00:16.604423 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.58s"
Apr 14 00:00:20.426578 kubelet[3095]: E0414 00:00:20.227639 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.31s"
Apr 14 00:00:23.718195 kubelet[3095]: E0414 00:00:23.679891 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:00:29.040441 kubelet[3095]: E0414 00:00:29.011060 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.509s"
Apr 14 00:00:29.636690 kubelet[3095]: E0414 00:00:29.619296 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:00:30.300613 kubelet[3095]: E0414 00:00:30.190723 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.178s"
Apr 14 00:00:32.985361 kubelet[3095]: E0414 00:00:32.974850 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.934s"
Apr 14 00:00:36.620030 kubelet[3095]: E0414 00:00:36.569731 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:00:38.320584 kubelet[3095]: E0414 00:00:38.317940 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.279s"
Apr 14 00:00:45.322705 kubelet[3095]: E0414 00:00:45.313544 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:00:50.176453 kubelet[3095]: E0414 00:00:50.175310 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.661s"
Apr 14 00:00:51.843689 kubelet[3095]: E0414 00:00:51.513634 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:00:53.795772 kubelet[3095]: E0414 00:00:53.786745 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:01:00.795376 kubelet[3095]: E0414 00:01:00.675425 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.274s"
Apr 14 00:01:06.133702 kubelet[3095]: E0414 00:01:06.131742 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:01:09.241956 kubelet[3095]: E0414 00:01:09.226689 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.42s"
Apr 14 00:01:10.072831 kubelet[3095]: E0414 00:01:10.072128 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:01:10.928562 kubelet[3095]: E0414 00:01:10.925429 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.575s"
Apr 14 00:01:11.973842 containerd[1599]: time="2026-04-14T00:01:11.965841264Z" level=error msg="failed to handle container TaskExit event container_id:\"895c625d9e84f5b57cd16dfec78e3aa5dd595c69ba49dc11482002c154dd2158\" id:\"895c625d9e84f5b57cd16dfec78e3aa5dd595c69ba49dc11482002c154dd2158\" pid:3718 exit_status:1 exited_at:{seconds:1776124861 nanos:147485119}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 14 00:01:12.132989 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-895c625d9e84f5b57cd16dfec78e3aa5dd595c69ba49dc11482002c154dd2158-rootfs.mount: Deactivated successfully.
Apr 14 00:01:12.176775 containerd[1599]: time="2026-04-14T00:01:12.133889308Z" level=error msg="ttrpc: received message on inactive stream" stream=35
Apr 14 00:01:12.257968 kubelet[3095]: E0414 00:01:12.253517 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:01:12.442531 kubelet[3095]: E0414 00:01:12.440439 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.508s"
Apr 14 00:01:12.578093 kubelet[3095]: E0414 00:01:12.539375 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:01:13.796832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab4b43671617960b8a726a50ad9306f348286309129ba143912f52a50cf4b9ca-rootfs.mount: Deactivated successfully.
Apr 14 00:01:13.923805 containerd[1599]: time="2026-04-14T00:01:13.800339372Z" level=error msg="ttrpc: received message on inactive stream" stream=47 Apr 14 00:01:13.923805 containerd[1599]: time="2026-04-14T00:01:13.810127978Z" level=info msg="TaskExit event container_id:\"895c625d9e84f5b57cd16dfec78e3aa5dd595c69ba49dc11482002c154dd2158\" id:\"895c625d9e84f5b57cd16dfec78e3aa5dd595c69ba49dc11482002c154dd2158\" pid:3718 exit_status:1 exited_at:{seconds:1776124861 nanos:147485119}" Apr 14 00:01:14.017935 containerd[1599]: time="2026-04-14T00:01:13.940831789Z" level=error msg="failed to shutdown shim task and the shim might be leaked" error="context deadline exceeded: unknown" id=ab4b43671617960b8a726a50ad9306f348286309129ba143912f52a50cf4b9ca Apr 14 00:01:14.198059 containerd[1599]: time="2026-04-14T00:01:14.191664446Z" level=info msg="shim disconnected" id=ab4b43671617960b8a726a50ad9306f348286309129ba143912f52a50cf4b9ca namespace=k8s.io Apr 14 00:01:14.230091 containerd[1599]: time="2026-04-14T00:01:14.213548749Z" level=warning msg="cleaning up after shim disconnected" id=ab4b43671617960b8a726a50ad9306f348286309129ba143912f52a50cf4b9ca namespace=k8s.io Apr 14 00:01:14.334829 containerd[1599]: time="2026-04-14T00:01:14.269943071Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:01:14.407526 containerd[1599]: time="2026-04-14T00:01:14.395601258Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=ab4b43671617960b8a726a50ad9306f348286309129ba143912f52a50cf4b9ca Apr 14 00:01:14.414783 containerd[1599]: time="2026-04-14T00:01:14.395608047Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id ab4b43671617960b8a726a50ad9306f348286309129ba143912f52a50cf4b9ca -bundle 
/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab4b43671617960b8a726a50ad9306f348286309129ba143912f52a50cf4b9ca delete" error="fork/exec /usr/bin/containerd-shim-runc-v2: no such file or directory" namespace=k8s.io Apr 14 00:01:14.414783 containerd[1599]: time="2026-04-14T00:01:14.412903607Z" level=warning msg="failed to clean up after shim disconnected" error=": fork/exec /usr/bin/containerd-shim-runc-v2: no such file or directory" id=ab4b43671617960b8a726a50ad9306f348286309129ba143912f52a50cf4b9ca namespace=k8s.io Apr 14 00:01:14.985672 kubelet[3095]: E0414 00:01:14.980911 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.149s" Apr 14 00:01:15.845739 kubelet[3095]: I0414 00:01:15.812319 3095 scope.go:117] "RemoveContainer" containerID="ab4b43671617960b8a726a50ad9306f348286309129ba143912f52a50cf4b9ca" Apr 14 00:01:15.845739 kubelet[3095]: E0414 00:01:15.815974 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:01:15.845739 kubelet[3095]: E0414 00:01:15.837744 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(39798d73a6894e44ae801eb773bf9a39)\"" pod="kube-system/kube-scheduler-localhost" podUID="39798d73a6894e44ae801eb773bf9a39" Apr 14 00:01:18.416416 kubelet[3095]: E0414 00:01:18.387844 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:01:18.814278 containerd[1599]: time="2026-04-14T00:01:18.813368059Z" level=info msg="shim disconnected" id=895c625d9e84f5b57cd16dfec78e3aa5dd595c69ba49dc11482002c154dd2158 namespace=k8s.io Apr 14 
00:01:18.843554 containerd[1599]: time="2026-04-14T00:01:18.824672575Z" level=warning msg="cleaning up after shim disconnected" id=895c625d9e84f5b57cd16dfec78e3aa5dd595c69ba49dc11482002c154dd2158 namespace=k8s.io Apr 14 00:01:18.857247 containerd[1599]: time="2026-04-14T00:01:18.851721324Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:01:19.012363 kubelet[3095]: E0414 00:01:19.011384 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.199s" Apr 14 00:01:21.513234 kubelet[3095]: I0414 00:01:21.512440 3095 scope.go:117] "RemoveContainer" containerID="ab4b43671617960b8a726a50ad9306f348286309129ba143912f52a50cf4b9ca" Apr 14 00:01:21.562690 kubelet[3095]: E0414 00:01:21.555632 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:01:21.702813 kubelet[3095]: E0414 00:01:21.691884 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(39798d73a6894e44ae801eb773bf9a39)\"" pod="kube-system/kube-scheduler-localhost" podUID="39798d73a6894e44ae801eb773bf9a39" Apr 14 00:01:23.389854 kubelet[3095]: I0414 00:01:23.388453 3095 scope.go:117] "RemoveContainer" containerID="895c625d9e84f5b57cd16dfec78e3aa5dd595c69ba49dc11482002c154dd2158" Apr 14 00:01:23.420803 kubelet[3095]: E0414 00:01:23.416886 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:01:23.425975 kubelet[3095]: E0414 00:01:23.424356 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s 
restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(ebf8e820819e4b80bc03d078b9ba80f5)\"" pod="kube-system/kube-controller-manager-localhost" podUID="ebf8e820819e4b80bc03d078b9ba80f5" Apr 14 00:01:23.694595 kubelet[3095]: E0414 00:01:23.686991 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:01:28.813199 kubelet[3095]: E0414 00:01:28.809000 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:01:29.693286 kubelet[3095]: I0414 00:01:29.691509 3095 scope.go:117] "RemoveContainer" containerID="895c625d9e84f5b57cd16dfec78e3aa5dd595c69ba49dc11482002c154dd2158" Apr 14 00:01:29.780724 kubelet[3095]: E0414 00:01:29.775129 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:01:29.929586 kubelet[3095]: E0414 00:01:29.925444 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(ebf8e820819e4b80bc03d078b9ba80f5)\"" pod="kube-system/kube-controller-manager-localhost" podUID="ebf8e820819e4b80bc03d078b9ba80f5" Apr 14 00:01:34.222458 kubelet[3095]: E0414 00:01:34.201368 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:01:36.091692 kubelet[3095]: I0414 00:01:36.091020 3095 scope.go:117] "RemoveContainer" 
containerID="ab4b43671617960b8a726a50ad9306f348286309129ba143912f52a50cf4b9ca" Apr 14 00:01:36.117284 kubelet[3095]: E0414 00:01:36.094938 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:01:36.187652 kubelet[3095]: E0414 00:01:36.182710 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(39798d73a6894e44ae801eb773bf9a39)\"" pod="kube-system/kube-scheduler-localhost" podUID="39798d73a6894e44ae801eb773bf9a39" Apr 14 00:01:39.390361 kubelet[3095]: E0414 00:01:39.388899 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:01:40.837095 kubelet[3095]: I0414 00:01:40.836310 3095 scope.go:117] "RemoveContainer" containerID="895c625d9e84f5b57cd16dfec78e3aa5dd595c69ba49dc11482002c154dd2158" Apr 14 00:01:40.860799 kubelet[3095]: E0414 00:01:40.837850 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:01:40.860799 kubelet[3095]: E0414 00:01:40.843950 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(ebf8e820819e4b80bc03d078b9ba80f5)\"" pod="kube-system/kube-controller-manager-localhost" podUID="ebf8e820819e4b80bc03d078b9ba80f5" Apr 14 00:01:44.625525 kubelet[3095]: E0414 00:01:44.621662 3095 kubelet.go:3117] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:01:51.014569 kubelet[3095]: I0414 00:01:50.726831 3095 scope.go:117] "RemoveContainer" containerID="ab4b43671617960b8a726a50ad9306f348286309129ba143912f52a50cf4b9ca" Apr 14 00:01:51.111667 kubelet[3095]: E0414 00:01:51.109630 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:01:51.236532 kubelet[3095]: E0414 00:01:51.105585 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:01:52.416153 containerd[1599]: time="2026-04-14T00:01:52.412849557Z" level=info msg="CreateContainer within sandbox \"fd39e6ba049f25172b871051ff662688823f734c044d1ef9098be63aa4680f60\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:4,}" Apr 14 00:01:53.512401 containerd[1599]: time="2026-04-14T00:01:53.509175894Z" level=info msg="CreateContainer within sandbox \"fd39e6ba049f25172b871051ff662688823f734c044d1ef9098be63aa4680f60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:4,} returns container id \"4c567dde22f4aceefbdc9289f1c32e56387ee9a4f84d27adcefb2aa0c4d74a16\"" Apr 14 00:01:53.689568 containerd[1599]: time="2026-04-14T00:01:53.686232167Z" level=info msg="StartContainer for \"4c567dde22f4aceefbdc9289f1c32e56387ee9a4f84d27adcefb2aa0c4d74a16\"" Apr 14 00:01:54.084953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2173012441.mount: Deactivated successfully. 
Apr 14 00:01:55.248294 kubelet[3095]: I0414 00:01:55.247637 3095 scope.go:117] "RemoveContainer" containerID="895c625d9e84f5b57cd16dfec78e3aa5dd595c69ba49dc11482002c154dd2158" Apr 14 00:01:55.338630 kubelet[3095]: E0414 00:01:55.309973 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:01:55.403493 kubelet[3095]: E0414 00:01:55.401200 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(ebf8e820819e4b80bc03d078b9ba80f5)\"" pod="kube-system/kube-controller-manager-localhost" podUID="ebf8e820819e4b80bc03d078b9ba80f5" Apr 14 00:01:56.217246 kubelet[3095]: E0414 00:01:56.214528 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.391s" Apr 14 00:01:56.615971 kubelet[3095]: E0414 00:01:56.544871 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:01:57.996438 kubelet[3095]: E0414 00:01:57.988646 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.07s" Apr 14 00:02:02.243456 kubelet[3095]: E0414 00:02:02.227133 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:02:05.818530 kubelet[3095]: E0414 00:02:05.811988 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1s" Apr 14 00:02:09.219587 kubelet[3095]: E0414 00:02:09.212477 3095 kubelet.go:3117] 
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:02:09.543872 kubelet[3095]: E0414 00:02:09.526983 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.704s" Apr 14 00:02:10.694615 kubelet[3095]: E0414 00:02:10.683398 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.109s" Apr 14 00:02:10.797629 kubelet[3095]: I0414 00:02:10.796486 3095 scope.go:117] "RemoveContainer" containerID="895c625d9e84f5b57cd16dfec78e3aa5dd595c69ba49dc11482002c154dd2158" Apr 14 00:02:10.811868 kubelet[3095]: E0414 00:02:10.810721 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:02:10.827940 kubelet[3095]: E0414 00:02:10.815057 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(ebf8e820819e4b80bc03d078b9ba80f5)\"" pod="kube-system/kube-controller-manager-localhost" podUID="ebf8e820819e4b80bc03d078b9ba80f5" Apr 14 00:02:14.616495 kubelet[3095]: E0414 00:02:14.606955 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:02:14.669178 containerd[1599]: time="2026-04-14T00:02:14.666962980Z" level=info msg="StartContainer for \"4c567dde22f4aceefbdc9289f1c32e56387ee9a4f84d27adcefb2aa0c4d74a16\" returns successfully" Apr 14 00:02:16.602521 kubelet[3095]: E0414 00:02:16.601985 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping 
took too long" expected="1s" actual="1.795s" Apr 14 00:02:18.607654 kubelet[3095]: E0414 00:02:18.562941 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.766s" Apr 14 00:02:20.596695 kubelet[3095]: E0414 00:02:20.590986 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:02:21.940622 kubelet[3095]: E0414 00:02:21.814634 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.875s" Apr 14 00:02:22.309055 kubelet[3095]: E0414 00:02:22.305143 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:02:23.314633 kubelet[3095]: E0414 00:02:23.283028 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.26s" Apr 14 00:02:24.542649 kubelet[3095]: E0414 00:02:24.540619 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.093s" Apr 14 00:02:24.944566 kubelet[3095]: I0414 00:02:24.936449 3095 scope.go:117] "RemoveContainer" containerID="895c625d9e84f5b57cd16dfec78e3aa5dd595c69ba49dc11482002c154dd2158" Apr 14 00:02:25.423062 kubelet[3095]: E0414 00:02:25.422630 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:02:25.454599 kubelet[3095]: E0414 00:02:25.452564 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:02:26.960621 kubelet[3095]: E0414 00:02:26.954089 3095 
kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:02:27.838619 kubelet[3095]: E0414 00:02:27.835675 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.814s" Apr 14 00:02:27.906626 containerd[1599]: time="2026-04-14T00:02:27.836061276Z" level=info msg="CreateContainer within sandbox \"6eb4e023684f0aae0c0b695bdae7b9e1c4c88cd28d650a1756ae84c21f4b692c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:6,}" Apr 14 00:02:29.376344 kubelet[3095]: E0414 00:02:29.373870 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.426s" Apr 14 00:02:30.858286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1406580395.mount: Deactivated successfully. Apr 14 00:02:31.964490 kubelet[3095]: E0414 00:02:31.919002 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.535s" Apr 14 00:02:32.040050 containerd[1599]: time="2026-04-14T00:02:32.038670165Z" level=info msg="CreateContainer within sandbox \"6eb4e023684f0aae0c0b695bdae7b9e1c4c88cd28d650a1756ae84c21f4b692c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:6,} returns container id \"da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7\"" Apr 14 00:02:33.118622 containerd[1599]: time="2026-04-14T00:02:33.040893272Z" level=info msg="StartContainer for \"da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7\"" Apr 14 00:02:33.233489 kubelet[3095]: E0414 00:02:33.212580 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:02:33.535841 kubelet[3095]: E0414 00:02:33.518618 3095 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:02:33.779493 kubelet[3095]: E0414 00:02:33.777876 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.365s" Apr 14 00:02:34.799882 kubelet[3095]: E0414 00:02:34.788071 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:02:35.488468 kubelet[3095]: E0414 00:02:35.445571 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.665s" Apr 14 00:02:39.299399 kubelet[3095]: E0414 00:02:39.291947 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:02:41.241742 kubelet[3095]: E0414 00:02:41.237726 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.41s" Apr 14 00:02:43.667643 kubelet[3095]: E0414 00:02:43.659485 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.274s" Apr 14 00:02:45.711534 kubelet[3095]: E0414 00:02:45.704645 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:02:46.325444 kubelet[3095]: E0414 00:02:46.321706 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.608s" Apr 14 00:02:46.809371 kubelet[3095]: I0414 00:02:46.805449 3095 scope.go:117] "RemoveContainer" containerID="895c625d9e84f5b57cd16dfec78e3aa5dd595c69ba49dc11482002c154dd2158" 
Apr 14 00:02:47.514683 containerd[1599]: time="2026-04-14T00:02:47.443267952Z" level=info msg="StartContainer for \"da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7\" returns successfully" Apr 14 00:02:49.228226 kubelet[3095]: E0414 00:02:49.162786 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.818s" Apr 14 00:02:50.204646 containerd[1599]: time="2026-04-14T00:02:50.191767422Z" level=info msg="RemoveContainer for \"895c625d9e84f5b57cd16dfec78e3aa5dd595c69ba49dc11482002c154dd2158\"" Apr 14 00:02:51.323592 containerd[1599]: time="2026-04-14T00:02:51.228998418Z" level=info msg="RemoveContainer for \"895c625d9e84f5b57cd16dfec78e3aa5dd595c69ba49dc11482002c154dd2158\" returns successfully" Apr 14 00:02:52.181636 kubelet[3095]: E0414 00:02:52.173242 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:02:59.777616 kubelet[3095]: E0414 00:02:59.741640 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:03:01.273915 kubelet[3095]: E0414 00:03:01.272532 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.709s" Apr 14 00:03:02.029215 kubelet[3095]: E0414 00:03:02.027921 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:03:03.542787 kubelet[3095]: E0414 00:03:03.536577 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.049s" Apr 14 00:03:03.819443 kubelet[3095]: E0414 00:03:03.795057 3095 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:03:05.431795 kubelet[3095]: E0414 00:03:05.418011 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:03:05.822955 kubelet[3095]: E0414 00:03:05.815032 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.013s" Apr 14 00:03:05.863249 kubelet[3095]: E0414 00:03:05.862895 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:03:10.384534 kubelet[3095]: E0414 00:03:10.377026 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.42s" Apr 14 00:03:11.391519 kubelet[3095]: E0414 00:03:11.344587 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:03:11.541960 kubelet[3095]: E0414 00:03:11.519901 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:03:12.229605 kubelet[3095]: E0414 00:03:12.165722 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.322s" Apr 14 00:03:14.196812 kubelet[3095]: E0414 00:03:14.195905 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.253s" Apr 14 00:03:14.211540 kubelet[3095]: E0414 00:03:14.203540 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:03:16.427395 kubelet[3095]: E0414 00:03:16.426829 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.601s" Apr 14 00:03:17.309778 kubelet[3095]: E0414 00:03:17.301085 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:03:18.636735 kubelet[3095]: E0414 00:03:18.634922 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.82s" Apr 14 00:03:23.153591 kubelet[3095]: E0414 00:03:23.133849 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.099s" Apr 14 00:03:23.856799 kubelet[3095]: E0414 00:03:23.726496 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:03:26.209456 kubelet[3095]: E0414 00:03:26.200862 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.032s" Apr 14 00:03:28.402785 kubelet[3095]: E0414 00:03:28.334749 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.943s" Apr 14 00:03:29.511546 kubelet[3095]: E0414 00:03:29.490156 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:03:30.580820 kubelet[3095]: E0414 00:03:30.413089 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.991s" Apr 14 00:03:33.176672 kubelet[3095]: E0414 
00:03:33.167778 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.469s" Apr 14 00:03:34.821665 kubelet[3095]: E0414 00:03:34.664966 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.495s" Apr 14 00:03:35.196370 kubelet[3095]: E0414 00:03:35.175949 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:03:36.416813 kubelet[3095]: E0414 00:03:36.415190 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.612s" Apr 14 00:03:41.316408 kubelet[3095]: E0414 00:03:41.312950 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:03:47.665699 kubelet[3095]: E0414 00:03:47.596692 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:03:51.215798 kubelet[3095]: E0414 00:03:51.210392 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.266s" Apr 14 00:03:54.076520 kubelet[3095]: E0414 00:03:54.073546 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:03:59.219962 kubelet[3095]: E0414 00:03:59.195029 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.979s" Apr 14 00:04:04.245481 kubelet[3095]: E0414 00:04:04.234740 3095 kubelet.go:3117] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:04:11.096614 kubelet[3095]: E0414 00:04:11.088020 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:04:11.974960 kubelet[3095]: E0414 00:04:11.974475 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.768s" Apr 14 00:04:12.444693 kubelet[3095]: E0414 00:04:12.436706 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:04:12.682951 kubelet[3095]: E0414 00:04:12.680723 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:04:14.143695 kubelet[3095]: E0414 00:04:14.135661 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.012s" Apr 14 00:04:15.617056 kubelet[3095]: E0414 00:04:15.600755 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.392s" Apr 14 00:04:16.704633 kubelet[3095]: E0414 00:04:16.686991 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:04:17.510540 kubelet[3095]: E0414 00:04:17.509528 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.887s" Apr 14 00:04:18.750601 kubelet[3095]: E0414 00:04:18.744890 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" 
expected="1s" actual="1.228s" Apr 14 00:04:19.233939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7-rootfs.mount: Deactivated successfully. Apr 14 00:04:19.400905 containerd[1599]: time="2026-04-14T00:04:19.271095006Z" level=info msg="shim disconnected" id=da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7 namespace=k8s.io Apr 14 00:04:19.588130 containerd[1599]: time="2026-04-14T00:04:19.570999437Z" level=warning msg="cleaning up after shim disconnected" id=da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7 namespace=k8s.io Apr 14 00:04:19.717644 containerd[1599]: time="2026-04-14T00:04:19.596689377Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:04:20.197405 containerd[1599]: time="2026-04-14T00:04:20.185018789Z" level=error msg="failed to handle container TaskExit event container_id:\"4c567dde22f4aceefbdc9289f1c32e56387ee9a4f84d27adcefb2aa0c4d74a16\" id:\"4c567dde22f4aceefbdc9289f1c32e56387ee9a4f84d27adcefb2aa0c4d74a16\" pid:3823 exit_status:1 exited_at:{seconds:1776125049 nanos:764999176}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 14 00:04:20.289643 containerd[1599]: time="2026-04-14T00:04:20.195836816Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7 Apr 14 00:04:20.365335 containerd[1599]: time="2026-04-14T00:04:20.345091585Z" level=error msg="ttrpc: received message on inactive stream" stream=35 Apr 14 00:04:20.406030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c567dde22f4aceefbdc9289f1c32e56387ee9a4f84d27adcefb2aa0c4d74a16-rootfs.mount: Deactivated successfully. 
Apr 14 00:04:21.383624 containerd[1599]: time="2026-04-14T00:04:21.377722938Z" level=info msg="TaskExit event container_id:\"4c567dde22f4aceefbdc9289f1c32e56387ee9a4f84d27adcefb2aa0c4d74a16\" id:\"4c567dde22f4aceefbdc9289f1c32e56387ee9a4f84d27adcefb2aa0c4d74a16\" pid:3823 exit_status:1 exited_at:{seconds:1776125049 nanos:764999176}" Apr 14 00:04:23.244488 kubelet[3095]: E0414 00:04:23.208662 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:04:23.778706 containerd[1599]: time="2026-04-14T00:04:23.766835937Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7 delete" error="exit status 1" namespace=k8s.io Apr 14 00:04:23.797974 containerd[1599]: time="2026-04-14T00:04:23.772090129Z" level=warning msg="failed to clean up after shim disconnected" error="io.containerd.runc.v2: getwd: no such file or directory: exit status 1" id=da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7 namespace=k8s.io Apr 14 00:04:23.941443 kubelet[3095]: E0414 00:04:23.624392 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.386s" Apr 14 00:04:26.667592 kubelet[3095]: E0414 00:04:26.658072 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.69s" Apr 14 00:04:27.934645 kubelet[3095]: E0414 00:04:27.922802 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.237s" Apr 14 00:04:28.219258 kubelet[3095]: I0414 00:04:28.215719 3095 
scope.go:117] "RemoveContainer" containerID="da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7" Apr 14 00:04:28.334000 kubelet[3095]: E0414 00:04:28.238390 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:04:28.500528 kubelet[3095]: E0414 00:04:28.493769 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(ebf8e820819e4b80bc03d078b9ba80f5)\"" pod="kube-system/kube-controller-manager-localhost" podUID="ebf8e820819e4b80bc03d078b9ba80f5" Apr 14 00:04:29.265551 kubelet[3095]: E0414 00:04:29.139329 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:04:29.799419 kubelet[3095]: E0414 00:04:29.797980 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.788s" Apr 14 00:04:30.021057 containerd[1599]: time="2026-04-14T00:04:30.001865856Z" level=info msg="shim disconnected" id=4c567dde22f4aceefbdc9289f1c32e56387ee9a4f84d27adcefb2aa0c4d74a16 namespace=k8s.io Apr 14 00:04:30.189155 containerd[1599]: time="2026-04-14T00:04:30.089993881Z" level=warning msg="cleaning up after shim disconnected" id=4c567dde22f4aceefbdc9289f1c32e56387ee9a4f84d27adcefb2aa0c4d74a16 namespace=k8s.io Apr 14 00:04:30.206614 containerd[1599]: time="2026-04-14T00:04:30.198083399Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:04:31.334786 kubelet[3095]: E0414 00:04:31.322666 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.477s" Apr 14 00:04:31.737825 
containerd[1599]: time="2026-04-14T00:04:31.727628476Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=4c567dde22f4aceefbdc9289f1c32e56387ee9a4f84d27adcefb2aa0c4d74a16 Apr 14 00:04:32.415586 kubelet[3095]: I0414 00:04:32.413554 3095 scope.go:117] "RemoveContainer" containerID="da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7" Apr 14 00:04:32.478602 kubelet[3095]: E0414 00:04:32.475016 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:04:32.777939 kubelet[3095]: E0414 00:04:32.699487 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(ebf8e820819e4b80bc03d078b9ba80f5)\"" pod="kube-system/kube-controller-manager-localhost" podUID="ebf8e820819e4b80bc03d078b9ba80f5" Apr 14 00:04:33.094601 kubelet[3095]: E0414 00:04:33.084051 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.679s" Apr 14 00:04:34.372519 containerd[1599]: time="2026-04-14T00:04:34.301078128Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id 4c567dde22f4aceefbdc9289f1c32e56387ee9a4f84d27adcefb2aa0c4d74a16 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/4c567dde22f4aceefbdc9289f1c32e56387ee9a4f84d27adcefb2aa0c4d74a16 delete" error="exit status 1" namespace=k8s.io Apr 14 00:04:34.404973 containerd[1599]: time="2026-04-14T00:04:34.389805111Z" level=warning msg="failed to clean up after shim disconnected" error="io.containerd.runc.v2: getwd: no such file or directory: exit 
status 1" id=4c567dde22f4aceefbdc9289f1c32e56387ee9a4f84d27adcefb2aa0c4d74a16 namespace=k8s.io Apr 14 00:04:34.512429 kubelet[3095]: E0414 00:04:34.510837 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:04:34.531089 kubelet[3095]: E0414 00:04:34.513909 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.412s" Apr 14 00:04:36.107639 kubelet[3095]: E0414 00:04:36.095746 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.292s" Apr 14 00:04:37.919530 kubelet[3095]: E0414 00:04:37.913155 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.099s" Apr 14 00:04:37.965803 kubelet[3095]: I0414 00:04:37.957770 3095 scope.go:117] "RemoveContainer" containerID="ab4b43671617960b8a726a50ad9306f348286309129ba143912f52a50cf4b9ca" Apr 14 00:04:38.138676 kubelet[3095]: I0414 00:04:38.137122 3095 scope.go:117] "RemoveContainer" containerID="4c567dde22f4aceefbdc9289f1c32e56387ee9a4f84d27adcefb2aa0c4d74a16" Apr 14 00:04:38.217969 kubelet[3095]: E0414 00:04:38.204615 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:04:38.322702 kubelet[3095]: E0414 00:04:38.319536 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(39798d73a6894e44ae801eb773bf9a39)\"" pod="kube-system/kube-scheduler-localhost" podUID="39798d73a6894e44ae801eb773bf9a39" Apr 14 00:04:39.321653 containerd[1599]: time="2026-04-14T00:04:39.287906051Z" level=info 
msg="RemoveContainer for \"ab4b43671617960b8a726a50ad9306f348286309129ba143912f52a50cf4b9ca\"" Apr 14 00:04:40.928625 kubelet[3095]: E0414 00:04:40.916688 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:04:41.340475 containerd[1599]: time="2026-04-14T00:04:41.339668563Z" level=info msg="RemoveContainer for \"ab4b43671617960b8a726a50ad9306f348286309129ba143912f52a50cf4b9ca\" returns successfully" Apr 14 00:04:41.845001 kubelet[3095]: E0414 00:04:41.840756 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.985s" Apr 14 00:04:42.757474 kubelet[3095]: I0414 00:04:42.742014 3095 scope.go:117] "RemoveContainer" containerID="4c567dde22f4aceefbdc9289f1c32e56387ee9a4f84d27adcefb2aa0c4d74a16" Apr 14 00:04:43.298965 kubelet[3095]: E0414 00:04:43.298447 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:04:43.630546 kubelet[3095]: E0414 00:04:43.500065 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(39798d73a6894e44ae801eb773bf9a39)\"" pod="kube-system/kube-scheduler-localhost" podUID="39798d73a6894e44ae801eb773bf9a39" Apr 14 00:04:44.044237 kubelet[3095]: E0414 00:04:44.009235 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.082s" Apr 14 00:04:46.174899 kubelet[3095]: E0414 00:04:46.146597 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" Apr 14 00:04:46.882659 kubelet[3095]: I0414 00:04:46.881842 3095 scope.go:117] "RemoveContainer" containerID="da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7" Apr 14 00:04:46.927479 kubelet[3095]: E0414 00:04:46.923247 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:04:47.075663 kubelet[3095]: E0414 00:04:47.070093 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(ebf8e820819e4b80bc03d078b9ba80f5)\"" pod="kube-system/kube-controller-manager-localhost" podUID="ebf8e820819e4b80bc03d078b9ba80f5" Apr 14 00:04:51.323554 kubelet[3095]: E0414 00:04:51.320605 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:04:55.844752 kubelet[3095]: I0414 00:04:55.842685 3095 scope.go:117] "RemoveContainer" containerID="4c567dde22f4aceefbdc9289f1c32e56387ee9a4f84d27adcefb2aa0c4d74a16" Apr 14 00:04:55.886489 kubelet[3095]: E0414 00:04:55.880776 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:04:55.904560 kubelet[3095]: E0414 00:04:55.902624 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(39798d73a6894e44ae801eb773bf9a39)\"" pod="kube-system/kube-scheduler-localhost" podUID="39798d73a6894e44ae801eb773bf9a39" Apr 14 00:04:56.544758 kubelet[3095]: 
E0414 00:04:56.540262 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:04:59.870492 kubelet[3095]: E0414 00:04:59.865758 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.055s" Apr 14 00:05:00.939866 kubelet[3095]: I0414 00:05:00.920272 3095 scope.go:117] "RemoveContainer" containerID="da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7" Apr 14 00:05:00.995705 kubelet[3095]: E0414 00:05:00.991362 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:05:01.132399 kubelet[3095]: E0414 00:05:01.115977 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(ebf8e820819e4b80bc03d078b9ba80f5)\"" pod="kube-system/kube-controller-manager-localhost" podUID="ebf8e820819e4b80bc03d078b9ba80f5" Apr 14 00:05:01.804194 kubelet[3095]: E0414 00:05:01.801023 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:05:06.991431 kubelet[3095]: E0414 00:05:06.980533 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:05:08.218938 kubelet[3095]: I0414 00:05:08.217028 3095 scope.go:117] "RemoveContainer" containerID="4c567dde22f4aceefbdc9289f1c32e56387ee9a4f84d27adcefb2aa0c4d74a16" Apr 14 00:05:08.239989 kubelet[3095]: E0414 00:05:08.236885 
3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:05:08.284494 kubelet[3095]: E0414 00:05:08.283543 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(39798d73a6894e44ae801eb773bf9a39)\"" pod="kube-system/kube-scheduler-localhost" podUID="39798d73a6894e44ae801eb773bf9a39" Apr 14 00:05:10.398273 kubelet[3095]: E0414 00:05:10.391981 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.575s" Apr 14 00:05:12.293729 kubelet[3095]: E0414 00:05:12.288553 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:05:14.187783 kubelet[3095]: I0414 00:05:14.184821 3095 scope.go:117] "RemoveContainer" containerID="da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7" Apr 14 00:05:14.259897 kubelet[3095]: E0414 00:05:14.225909 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:05:14.277052 kubelet[3095]: E0414 00:05:14.262225 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(ebf8e820819e4b80bc03d078b9ba80f5)\"" pod="kube-system/kube-controller-manager-localhost" podUID="ebf8e820819e4b80bc03d078b9ba80f5" Apr 14 00:05:17.501647 kubelet[3095]: E0414 00:05:17.497609 3095 kubelet.go:3117] "Container runtime 
network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:05:19.820813 kubelet[3095]: E0414 00:05:19.818435 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.02s" Apr 14 00:05:21.933698 kubelet[3095]: I0414 00:05:21.925035 3095 scope.go:117] "RemoveContainer" containerID="4c567dde22f4aceefbdc9289f1c32e56387ee9a4f84d27adcefb2aa0c4d74a16" Apr 14 00:05:21.977483 kubelet[3095]: E0414 00:05:21.974774 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:05:22.105665 kubelet[3095]: E0414 00:05:22.097068 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(39798d73a6894e44ae801eb773bf9a39)\"" pod="kube-system/kube-scheduler-localhost" podUID="39798d73a6894e44ae801eb773bf9a39" Apr 14 00:05:22.697539 kubelet[3095]: E0414 00:05:22.677914 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:05:26.115744 kubelet[3095]: E0414 00:05:26.113537 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.31s" Apr 14 00:05:26.884309 kubelet[3095]: I0414 00:05:26.883639 3095 scope.go:117] "RemoveContainer" containerID="da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7" Apr 14 00:05:26.891067 kubelet[3095]: E0414 00:05:26.884580 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Apr 14 00:05:26.891067 kubelet[3095]: E0414 00:05:26.889296 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(ebf8e820819e4b80bc03d078b9ba80f5)\"" pod="kube-system/kube-controller-manager-localhost" podUID="ebf8e820819e4b80bc03d078b9ba80f5" Apr 14 00:05:27.887422 kubelet[3095]: E0414 00:05:27.883957 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:05:30.761482 kubelet[3095]: E0414 00:05:30.759658 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.921s" Apr 14 00:05:30.833035 kubelet[3095]: E0414 00:05:30.826193 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:05:33.338252 kubelet[3095]: E0414 00:05:33.288585 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:05:36.920184 kubelet[3095]: I0414 00:05:36.918601 3095 scope.go:117] "RemoveContainer" containerID="4c567dde22f4aceefbdc9289f1c32e56387ee9a4f84d27adcefb2aa0c4d74a16" Apr 14 00:05:36.935957 kubelet[3095]: E0414 00:05:36.933598 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:05:37.904683 containerd[1599]: time="2026-04-14T00:05:37.894038106Z" level=info msg="CreateContainer within sandbox \"fd39e6ba049f25172b871051ff662688823f734c044d1ef9098be63aa4680f60\" 
for container &ContainerMetadata{Name:kube-scheduler,Attempt:5,}" Apr 14 00:05:38.566949 kubelet[3095]: E0414 00:05:38.554514 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:05:40.407509 containerd[1599]: time="2026-04-14T00:05:40.406210269Z" level=info msg="CreateContainer within sandbox \"fd39e6ba049f25172b871051ff662688823f734c044d1ef9098be63aa4680f60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:5,} returns container id \"10d02d25715689d670ceda560f31840f5c3e1d1a90ee9bfa67bf89f888cd6214\"" Apr 14 00:05:41.177755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3766963867.mount: Deactivated successfully. Apr 14 00:05:42.432317 containerd[1599]: time="2026-04-14T00:05:42.431808289Z" level=info msg="StartContainer for \"10d02d25715689d670ceda560f31840f5c3e1d1a90ee9bfa67bf89f888cd6214\"" Apr 14 00:05:42.655556 kubelet[3095]: E0414 00:05:42.633063 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.802s" Apr 14 00:05:42.888640 kubelet[3095]: I0414 00:05:42.882071 3095 scope.go:117] "RemoveContainer" containerID="da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7" Apr 14 00:05:42.908484 kubelet[3095]: E0414 00:05:42.904849 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:05:43.002585 kubelet[3095]: E0414 00:05:43.001557 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(ebf8e820819e4b80bc03d078b9ba80f5)\"" pod="kube-system/kube-controller-manager-localhost" 
podUID="ebf8e820819e4b80bc03d078b9ba80f5" Apr 14 00:05:44.642515 kubelet[3095]: E0414 00:05:44.638230 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:05:44.706696 kubelet[3095]: E0414 00:05:44.705914 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.993s" Apr 14 00:05:46.137790 kubelet[3095]: E0414 00:05:46.125734 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.404s" Apr 14 00:05:47.549675 kubelet[3095]: E0414 00:05:47.547085 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.419s" Apr 14 00:05:50.938512 kubelet[3095]: E0414 00:05:50.935943 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.381s" Apr 14 00:05:50.974815 kubelet[3095]: E0414 00:05:50.945765 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:05:52.963288 kubelet[3095]: E0414 00:05:52.961726 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.018s" Apr 14 00:05:55.013554 kubelet[3095]: I0414 00:05:55.010846 3095 scope.go:117] "RemoveContainer" containerID="da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7" Apr 14 00:05:55.033716 kubelet[3095]: E0414 00:05:55.026875 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:05:55.036828 kubelet[3095]: E0414 00:05:55.035594 3095 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(ebf8e820819e4b80bc03d078b9ba80f5)\"" pod="kube-system/kube-controller-manager-localhost" podUID="ebf8e820819e4b80bc03d078b9ba80f5" Apr 14 00:05:56.402652 kubelet[3095]: E0414 00:05:56.394902 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:05:58.026511 kubelet[3095]: E0414 00:05:57.929951 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.093s" Apr 14 00:06:00.323161 kubelet[3095]: E0414 00:06:00.315215 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.518s" Apr 14 00:06:02.000514 kubelet[3095]: E0414 00:06:01.997054 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 00:06:03.410729 containerd[1599]: time="2026-04-14T00:06:03.376014756Z" level=info msg="StartContainer for \"10d02d25715689d670ceda560f31840f5c3e1d1a90ee9bfa67bf89f888cd6214\" returns successfully" Apr 14 00:06:05.108542 kubelet[3095]: E0414 00:06:05.098449 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.234s" Apr 14 00:06:06.515306 kubelet[3095]: E0414 00:06:06.513626 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.405s" Apr 14 00:06:06.870595 kubelet[3095]: E0414 00:06:06.864068 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:06:07.231352 kubelet[3095]: I0414 00:06:07.211892 3095 scope.go:117] "RemoveContainer" containerID="da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7"
Apr 14 00:06:07.688619 kubelet[3095]: E0414 00:06:07.686907 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:06:07.994269 kubelet[3095]: E0414 00:06:07.934986 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:06:08.177344 kubelet[3095]: E0414 00:06:08.173087 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(ebf8e820819e4b80bc03d078b9ba80f5)\"" pod="kube-system/kube-controller-manager-localhost" podUID="ebf8e820819e4b80bc03d078b9ba80f5"
Apr 14 00:06:11.039689 kubelet[3095]: E0414 00:06:11.017002 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.166s"
Apr 14 00:06:11.690965 kubelet[3095]: E0414 00:06:11.687047 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:06:12.599620 kubelet[3095]: E0414 00:06:12.594078 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.444s"
Apr 14 00:06:13.493461 kubelet[3095]: E0414 00:06:13.490524 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:06:14.410610 kubelet[3095]: E0414 00:06:14.407897 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.59s"
Apr 14 00:06:14.643828 kubelet[3095]: E0414 00:06:14.640816 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:06:18.191009 kubelet[3095]: E0414 00:06:18.181667 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.246s"
Apr 14 00:06:19.578658 kubelet[3095]: E0414 00:06:19.542041 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:06:21.875711 kubelet[3095]: E0414 00:06:21.845874 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.641s"
Apr 14 00:06:23.213364 kubelet[3095]: E0414 00:06:23.133060 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.233s"
Apr 14 00:06:23.311981 kubelet[3095]: I0414 00:06:23.310477 3095 scope.go:117] "RemoveContainer" containerID="da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7"
Apr 14 00:06:23.392289 kubelet[3095]: E0414 00:06:23.390679 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:06:23.478928 kubelet[3095]: E0414 00:06:23.405751 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(ebf8e820819e4b80bc03d078b9ba80f5)\"" pod="kube-system/kube-controller-manager-localhost" podUID="ebf8e820819e4b80bc03d078b9ba80f5"
Apr 14 00:06:24.797558 kubelet[3095]: E0414 00:06:24.793589 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:06:30.101579 kubelet[3095]: E0414 00:06:30.088810 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.281s"
Apr 14 00:06:30.544769 kubelet[3095]: E0414 00:06:30.537949 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:06:32.819650 kubelet[3095]: E0414 00:06:32.808820 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.993s"
Apr 14 00:06:36.206860 kubelet[3095]: E0414 00:06:36.201915 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.386s"
Apr 14 00:06:37.183461 kubelet[3095]: E0414 00:06:37.179708 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:06:39.010971 kubelet[3095]: E0414 00:06:39.009425 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.704s"
Apr 14 00:06:41.088815 kubelet[3095]: I0414 00:06:40.926694 3095 scope.go:117] "RemoveContainer" containerID="da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7"
Apr 14 00:06:42.068602 kubelet[3095]: E0414 00:06:42.067656 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:06:42.119760 kubelet[3095]: E0414 00:06:41.672028 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:06:43.120538 kubelet[3095]: E0414 00:06:42.579033 3095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(ebf8e820819e4b80bc03d078b9ba80f5)\"" pod="kube-system/kube-controller-manager-localhost" podUID="ebf8e820819e4b80bc03d078b9ba80f5"
Apr 14 00:06:44.396906 kubelet[3095]: E0414 00:06:44.220622 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:06:45.864595 kubelet[3095]: E0414 00:06:45.823015 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.717s"
Apr 14 00:06:46.500416 kubelet[3095]: E0414 00:06:46.497556 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:06:48.012057 kubelet[3095]: E0414 00:06:48.011803 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.004s"
Apr 14 00:06:50.120522 kubelet[3095]: E0414 00:06:50.112026 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:06:50.150616 kubelet[3095]: E0414 00:06:50.148165 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.133s"
Apr 14 00:06:50.896289 kubelet[3095]: E0414 00:06:50.895365 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:06:54.617712 kubelet[3095]: E0414 00:06:54.611802 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.796s"
Apr 14 00:06:55.337354 kubelet[3095]: E0414 00:06:55.336818 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:06:57.945530 kubelet[3095]: I0414 00:06:57.940716 3095 scope.go:117] "RemoveContainer" containerID="da4b1d7eb0c219e1cca63742737a19d7218717361cc18e524901f8337d8c76e7"
Apr 14 00:06:58.007787 kubelet[3095]: E0414 00:06:58.001831 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:06:59.727078 containerd[1599]: time="2026-04-14T00:06:59.709558755Z" level=info msg="CreateContainer within sandbox \"6eb4e023684f0aae0c0b695bdae7b9e1c4c88cd28d650a1756ae84c21f4b692c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:7,}"
Apr 14 00:07:01.660570 kubelet[3095]: E0414 00:07:01.654661 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:07:01.852946 containerd[1599]: time="2026-04-14T00:07:01.852752733Z" level=info msg="CreateContainer within sandbox \"6eb4e023684f0aae0c0b695bdae7b9e1c4c88cd28d650a1756ae84c21f4b692c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:7,} returns container id \"0bdc32aae5275058338376fd9a489c4a341540b8deb88c3fadf4efc8aa490270\""
Apr 14 00:07:02.644624 containerd[1599]: time="2026-04-14T00:07:02.602849551Z" level=info msg="StartContainer for \"0bdc32aae5275058338376fd9a489c4a341540b8deb88c3fadf4efc8aa490270\""
Apr 14 00:07:02.824960 kubelet[3095]: E0414 00:07:02.824091 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.02s"
Apr 14 00:07:04.928787 kubelet[3095]: E0414 00:07:04.915940 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.088s"
Apr 14 00:07:06.362331 kubelet[3095]: E0414 00:07:06.361638 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.431s"
Apr 14 00:07:07.340254 systemd[1]: run-containerd-runc-k8s.io-0bdc32aae5275058338376fd9a489c4a341540b8deb88c3fadf4efc8aa490270-runc.8r7TQv.mount: Deactivated successfully.
Apr 14 00:07:08.200376 kubelet[3095]: E0414 00:07:08.186429 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:07:09.797567 containerd[1599]: time="2026-04-14T00:07:09.791735265Z" level=info msg="StartContainer for \"0bdc32aae5275058338376fd9a489c4a341540b8deb88c3fadf4efc8aa490270\" returns successfully"
Apr 14 00:07:16.784348 kubelet[3095]: E0414 00:07:16.782871 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:07:21.145672 kubelet[3095]: E0414 00:07:21.134322 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="14.226s"
Apr 14 00:07:27.209866 kubelet[3095]: E0414 00:07:26.998300 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:07:36.365639 kubelet[3095]: E0414 00:07:35.917002 3095 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:07:37.012552 kubelet[3095]: E0414 00:07:37.011731 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.592s"
Apr 14 00:07:37.233887 kubelet[3095]: E0414 00:07:37.233535 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:07:40.202906 kubelet[3095]: E0414 00:07:40.199933 3095 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.405s"
Apr 14 00:07:43.166879 kubelet[3095]: E0414 00:07:43.162752 3095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"