Apr 20 20:44:40.816977 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 15.2.1_p20260214 p5) 15.2.1 20260214, GNU ld (Gentoo 2.46.0 p1) 2.46.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 14 02:21:25 -00 2026
Apr 20 20:44:40.817100 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a
Apr 20 20:44:40.817112 kernel: BIOS-provided physical RAM map:
Apr 20 20:44:40.817120 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 20 20:44:40.817128 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 20 20:44:40.817138 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 20 20:44:40.817150 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 20 20:44:40.817158 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 20 20:44:40.817168 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 20 20:44:40.817179 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 20 20:44:40.817190 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 20 20:44:40.817228 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 20 20:44:40.817237 kernel: NX (Execute Disable) protection: active
Apr 20 20:44:40.817246 kernel: APIC: Static calls initialized
Apr 20 20:44:40.817258 kernel: SMBIOS 2.8 present.
Apr 20 20:44:40.817268 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 20 20:44:40.817278 kernel: DMI: Memory slots populated: 1/1
Apr 20 20:44:40.817289 kernel: Hypervisor detected: KVM
Apr 20 20:44:40.817299 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 20 20:44:40.817307 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 20 20:44:40.817319 kernel: kvm-clock: using sched offset of 11493602070 cycles
Apr 20 20:44:40.817331 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 20 20:44:40.817344 kernel: tsc: Detected 2793.438 MHz processor
Apr 20 20:44:40.817356 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 20 20:44:40.817368 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 20 20:44:40.817380 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 20 20:44:40.817393 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 20 20:44:40.817402 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 20 20:44:40.817415 kernel: Using GB pages for direct mapping
Apr 20 20:44:40.817423 kernel: ACPI: Early table checksum verification disabled
Apr 20 20:44:40.817431 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 20 20:44:40.817442 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 20:44:40.817452 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 20:44:40.817465 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 20:44:40.817473 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 20 20:44:40.817481 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 20:44:40.817489 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 20:44:40.817500 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 20:44:40.817508 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 20:44:40.817517 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 20 20:44:40.817533 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 20 20:44:40.817545 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 20 20:44:40.817554 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 20 20:44:40.817566 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 20 20:44:40.817575 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 20 20:44:40.817589 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 20 20:44:40.817598 kernel: No NUMA configuration found
Apr 20 20:44:40.817608 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 20 20:44:40.817664 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Apr 20 20:44:40.817675 kernel: Zone ranges:
Apr 20 20:44:40.817686 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 20 20:44:40.817700 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 20 20:44:40.817716 kernel: Normal empty
Apr 20 20:44:40.817726 kernel: Device empty
Apr 20 20:44:40.817736 kernel: Movable zone start for each node
Apr 20 20:44:40.817744 kernel: Early memory node ranges
Apr 20 20:44:40.817752 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 20 20:44:40.817760 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 20 20:44:40.817768 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 20 20:44:40.817777 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 20 20:44:40.817787 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 20 20:44:40.817795 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 20 20:44:40.817803 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 20 20:44:40.817811 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 20 20:44:40.817819 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 20 20:44:40.817829 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 20 20:44:40.817840 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 20 20:44:40.817853 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 20 20:44:40.817864 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 20 20:44:40.817874 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 20 20:44:40.817885 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 20 20:44:40.817896 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 20 20:44:40.817906 kernel: TSC deadline timer available
Apr 20 20:44:40.817917 kernel: CPU topo: Max. logical packages: 1
Apr 20 20:44:40.817930 kernel: CPU topo: Max. logical dies: 1
Apr 20 20:44:40.817940 kernel: CPU topo: Max. dies per package: 1
Apr 20 20:44:40.817951 kernel: CPU topo: Max. threads per core: 1
Apr 20 20:44:40.817962 kernel: CPU topo: Num. cores per package: 4
Apr 20 20:44:40.817972 kernel: CPU topo: Num. threads per package: 4
Apr 20 20:44:40.817983 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 20 20:44:40.817993 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 20 20:44:40.818005 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 20 20:44:40.818017 kernel: kvm-guest: setup PV sched yield
Apr 20 20:44:40.818028 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 20 20:44:40.818039 kernel: Booting paravirtualized kernel on KVM
Apr 20 20:44:40.818050 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 20 20:44:40.818062 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 20 20:44:40.818072 kernel: percpu: Embedded 60 pages/cpu s207960 r8192 d29608 u524288
Apr 20 20:44:40.818081 kernel: pcpu-alloc: s207960 r8192 d29608 u524288 alloc=1*2097152
Apr 20 20:44:40.818091 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 20 20:44:40.818100 kernel: kvm-guest: PV spinlocks enabled
Apr 20 20:44:40.818110 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 20 20:44:40.818120 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a
Apr 20 20:44:40.818131 kernel: random: crng init done
Apr 20 20:44:40.818140 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 20 20:44:40.818150 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 20 20:44:40.818159 kernel: Fallback order for Node 0: 0
Apr 20 20:44:40.818168 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Apr 20 20:44:40.818177 kernel: Policy zone: DMA32
Apr 20 20:44:40.818186 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 20 20:44:40.818196 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 20 20:44:40.818233 kernel: ftrace: allocating 40346 entries in 158 pages
Apr 20 20:44:40.818241 kernel: ftrace: allocated 158 pages with 5 groups
Apr 20 20:44:40.818252 kernel: Dynamic Preempt: voluntary
Apr 20 20:44:40.818261 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 20 20:44:40.818273 kernel: rcu: RCU event tracing is enabled.
Apr 20 20:44:40.818283 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 20 20:44:40.818293 kernel: Trampoline variant of Tasks RCU enabled.
Apr 20 20:44:40.818304 kernel: Rude variant of Tasks RCU enabled.
Apr 20 20:44:40.818315 kernel: Tracing variant of Tasks RCU enabled.
Apr 20 20:44:40.818324 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 20 20:44:40.818338 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 20 20:44:40.818348 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 20:44:40.818359 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 20:44:40.818369 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 20:44:40.818379 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 20 20:44:40.818389 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 20 20:44:40.818400 kernel: Console: colour VGA+ 80x25
Apr 20 20:44:40.818421 kernel: printk: legacy console [ttyS0] enabled
Apr 20 20:44:40.818432 kernel: ACPI: Core revision 20240827
Apr 20 20:44:40.818445 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 20 20:44:40.818455 kernel: APIC: Switch to symmetric I/O mode setup
Apr 20 20:44:40.818464 kernel: x2apic enabled
Apr 20 20:44:40.818474 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 20 20:44:40.818485 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 20 20:44:40.818495 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 20 20:44:40.818505 kernel: kvm-guest: setup PV IPIs
Apr 20 20:44:40.818514 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 20 20:44:40.818522 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 20 20:44:40.818532 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 20 20:44:40.818541 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 20 20:44:40.818551 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 20 20:44:40.818559 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 20 20:44:40.818569 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 20 20:44:40.818577 kernel: Spectre V2 : Mitigation: Retpolines
Apr 20 20:44:40.818586 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 20 20:44:40.818594 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 20 20:44:40.818604 kernel: RETBleed: Vulnerable
Apr 20 20:44:40.818614 kernel: Speculative Store Bypass: Vulnerable
Apr 20 20:44:40.818702 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 20 20:44:40.818711 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 20 20:44:40.818721 kernel: active return thunk: its_return_thunk
Apr 20 20:44:40.818731 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 20 20:44:40.818740 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 20 20:44:40.818748 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 20 20:44:40.818760 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 20 20:44:40.818770 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 20 20:44:40.818779 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 20 20:44:40.818788 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 20 20:44:40.818796 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 20 20:44:40.818804 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 20 20:44:40.818814 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 20 20:44:40.818825 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 20 20:44:40.818833 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 20 20:44:40.818843 kernel: Freeing SMP alternatives memory: 32K
Apr 20 20:44:40.818852 kernel: pid_max: default: 32768 minimum: 301
Apr 20 20:44:40.818861 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 20 20:44:40.818869 kernel: landlock: Up and running.
Apr 20 20:44:40.818878 kernel: SELinux: Initializing.
Apr 20 20:44:40.818889 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 20 20:44:40.818900 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 20 20:44:40.818910 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 20 20:44:40.818920 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 20 20:44:40.818931 kernel: signal: max sigframe size: 3632
Apr 20 20:44:40.818942 kernel: rcu: Hierarchical SRCU implementation.
Apr 20 20:44:40.818953 kernel: rcu: Max phase no-delay instances is 400.
Apr 20 20:44:40.818966 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 20 20:44:40.818977 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 20 20:44:40.818988 kernel: smp: Bringing up secondary CPUs ...
Apr 20 20:44:40.818999 kernel: smpboot: x86: Booting SMP configuration:
Apr 20 20:44:40.819010 kernel: .... node #0, CPUs: #1 #2 #3
Apr 20 20:44:40.819021 kernel: smp: Brought up 1 node, 4 CPUs
Apr 20 20:44:40.819031 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 20 20:44:40.819045 kernel: Memory: 2444328K/2571752K available (14336K kernel code, 2458K rwdata, 31736K rodata, 15944K init, 2284K bss, 121532K reserved, 0K cma-reserved)
Apr 20 20:44:40.819057 kernel: devtmpfs: initialized
Apr 20 20:44:40.819068 kernel: x86/mm: Memory block size: 128MB
Apr 20 20:44:40.819079 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 20 20:44:40.819090 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 20 20:44:40.819100 kernel: pinctrl core: initialized pinctrl subsystem
Apr 20 20:44:40.819108 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 20 20:44:40.819119 kernel: audit: initializing netlink subsys (disabled)
Apr 20 20:44:40.819127 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 20 20:44:40.819136 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 20 20:44:40.819147 kernel: audit: type=2000 audit(1776717868.351:1): state=initialized audit_enabled=0 res=1
Apr 20 20:44:40.819158 kernel: cpuidle: using governor menu
Apr 20 20:44:40.819168 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 20 20:44:40.819178 kernel: dca service started, version 1.12.1
Apr 20 20:44:40.819190 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Apr 20 20:44:40.819227 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 20 20:44:40.819237 kernel: PCI: Using configuration type 1 for base access
Apr 20 20:44:40.819246 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 20 20:44:40.819255 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 20 20:44:40.819266 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 20 20:44:40.819277 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 20 20:44:40.819291 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 20 20:44:40.819301 kernel: ACPI: Added _OSI(Module Device)
Apr 20 20:44:40.819312 kernel: ACPI: Added _OSI(Processor Device)
Apr 20 20:44:40.819323 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 20 20:44:40.819333 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 20 20:44:40.819344 kernel: ACPI: Interpreter enabled
Apr 20 20:44:40.819355 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 20 20:44:40.819365 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 20 20:44:40.819378 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 20 20:44:40.819389 kernel: PCI: Using E820 reservations for host bridge windows
Apr 20 20:44:40.819399 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 20 20:44:40.819411 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 20 20:44:40.819902 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 20 20:44:40.820136 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 20 20:44:40.820315 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 20 20:44:40.820330 kernel: PCI host bridge to bus 0000:00
Apr 20 20:44:40.820981 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 20 20:44:40.821130 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 20 20:44:40.821287 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 20 20:44:40.821415 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 20 20:44:40.821577 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 20 20:44:40.821747 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 20 20:44:40.821863 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 20 20:44:40.822016 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 20 20:44:40.822157 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 20 20:44:40.822331 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Apr 20 20:44:40.822468 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Apr 20 20:44:40.822598 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Apr 20 20:44:40.822959 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 20 20:44:40.823111 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 20 20:44:40.823413 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Apr 20 20:44:40.824157 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Apr 20 20:44:40.824321 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 20 20:44:40.824470 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 20 20:44:40.824604 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Apr 20 20:44:40.824803 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Apr 20 20:44:40.824938 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 20 20:44:40.825038 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 20 20:44:40.825128 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Apr 20 20:44:40.825472 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Apr 20 20:44:40.825594 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 20 20:44:40.825738 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Apr 20 20:44:40.829457 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 20 20:44:40.829691 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 20 20:44:40.830998 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 20 20:44:40.831162 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Apr 20 20:44:40.831927 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Apr 20 20:44:40.832041 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 20 20:44:40.832313 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Apr 20 20:44:40.832332 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 20 20:44:40.832341 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 20 20:44:40.832351 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 20 20:44:40.832360 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 20 20:44:40.832370 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 20 20:44:40.833538 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 20 20:44:40.833555 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 20 20:44:40.833567 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 20 20:44:40.833578 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 20 20:44:40.833589 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 20 20:44:40.833600 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 20 20:44:40.833611 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 20 20:44:40.833674 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 20 20:44:40.833684 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 20 20:44:40.833695 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 20 20:44:40.833706 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 20 20:44:40.833716 kernel: iommu: Default domain type: Translated
Apr 20 20:44:40.833725 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 20 20:44:40.833736 kernel: PCI: Using ACPI for IRQ routing
Apr 20 20:44:40.833747 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 20 20:44:40.833757 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 20 20:44:40.833766 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 20 20:44:40.837509 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 20 20:44:40.837708 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 20 20:44:40.837830 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 20 20:44:40.837841 kernel: vgaarb: loaded
Apr 20 20:44:40.837852 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 20 20:44:40.837859 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 20 20:44:40.837866 kernel: clocksource: Switched to clocksource kvm-clock
Apr 20 20:44:40.837872 kernel: VFS: Disk quotas dquot_6.6.0
Apr 20 20:44:40.837879 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 20 20:44:40.837885 kernel: pnp: PnP ACPI init
Apr 20 20:44:40.837991 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 20 20:44:40.838003 kernel: pnp: PnP ACPI: found 6 devices
Apr 20 20:44:40.838010 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 20 20:44:40.838016 kernel: NET: Registered PF_INET protocol family
Apr 20 20:44:40.838022 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 20 20:44:40.838029 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 20 20:44:40.838036 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 20 20:44:40.838042 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 20 20:44:40.838050 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 20 20:44:40.838056 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 20 20:44:40.838063 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 20 20:44:40.838069 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 20 20:44:40.838075 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 20 20:44:40.838082 kernel: NET: Registered PF_XDP protocol family
Apr 20 20:44:40.838169 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 20 20:44:40.838989 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 20 20:44:40.839083 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 20 20:44:40.839165 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 20 20:44:40.839279 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 20 20:44:40.839362 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 20 20:44:40.839370 kernel: PCI: CLS 0 bytes, default 64
Apr 20 20:44:40.839377 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 20 20:44:40.839387 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 20 20:44:40.839394 kernel: Initialise system trusted keyrings
Apr 20 20:44:40.839400 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 20 20:44:40.839407 kernel: Key type asymmetric registered
Apr 20 20:44:40.839413 kernel: Asymmetric key parser 'x509' registered
Apr 20 20:44:40.839419 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 20 20:44:40.839427 kernel: io scheduler mq-deadline registered
Apr 20 20:44:40.839433 kernel: io scheduler kyber registered
Apr 20 20:44:40.839440 kernel: io scheduler bfq registered
Apr 20 20:44:40.839446 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 20 20:44:40.839453 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 20 20:44:40.839460 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 20 20:44:40.839466 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 20 20:44:40.839473 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 20 20:44:40.839480 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 20 20:44:40.839487 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 20 20:44:40.839493 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 20 20:44:40.839500 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 20 20:44:40.839597 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 20 20:44:40.839606 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 20 20:44:40.840130 kernel: rtc_cmos 00:04: registered as rtc0
Apr 20 20:44:40.840993 kernel: rtc_cmos 00:04: setting system clock to 2026-04-20T20:44:34 UTC (1776717874)
Apr 20 20:44:40.841127 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 20 20:44:40.841136 kernel: intel_pstate: CPU model not supported
Apr 20 20:44:40.841142 kernel: NET: Registered PF_INET6 protocol family
Apr 20 20:44:40.841149 kernel: Segment Routing with IPv6
Apr 20 20:44:40.841156 kernel: In-situ OAM (IOAM) with IPv6
Apr 20 20:44:40.841166 kernel: NET: Registered PF_PACKET protocol family
Apr 20 20:44:40.841173 kernel: Key type dns_resolver registered
Apr 20 20:44:40.841179 kernel: IPI shorthand broadcast: enabled
Apr 20 20:44:40.841186 kernel: sched_clock: Marking stable (3839051599, 2131997489)->(7152267188, -1181218100)
Apr 20 20:44:40.841192 kernel: registered taskstats version 1
Apr 20 20:44:40.841199 kernel: Loading compiled-in X.509 certificates
Apr 20 20:44:40.841233 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 7cf14208c08026297bea8a5678f7340932b35e4b'
Apr 20 20:44:40.841242 kernel: Demotion targets for Node 0: null
Apr 20 20:44:40.841248 kernel: Key type .fscrypt registered
Apr 20 20:44:40.841255 kernel: Key type fscrypt-provisioning registered
Apr 20 20:44:40.841261 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 20 20:44:40.841267 kernel: ima: Allocated hash algorithm: sha1
Apr 20 20:44:40.841274 kernel: ima: No architecture policies found
Apr 20 20:44:40.841280 kernel: clk: Disabling unused clocks
Apr 20 20:44:40.841286 kernel: Freeing unused kernel image (initmem) memory: 15944K
Apr 20 20:44:40.841294 kernel: Write protecting the kernel read-only data: 47104k
Apr 20 20:44:40.841301 kernel: Freeing unused kernel image (rodata/data gap) memory: 1032K
Apr 20 20:44:40.841307 kernel: Run /init as init process
Apr 20 20:44:40.843030 kernel: with arguments:
Apr 20 20:44:40.843056 kernel: /init
Apr 20 20:44:40.843066 kernel: with environment:
Apr 20 20:44:40.843075 kernel: HOME=/
Apr 20 20:44:40.843122 kernel: TERM=linux
Apr 20 20:44:40.843132 kernel: SCSI subsystem initialized
Apr 20 20:44:40.843142 kernel: libata version 3.00 loaded.
Apr 20 20:44:40.845156 kernel: ahci 0000:00:1f.2: version 3.0
Apr 20 20:44:40.845183 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 20 20:44:40.845858 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Apr 20 20:44:40.847945 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Apr 20 20:44:40.856372 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 20 20:44:40.858493 kernel: scsi host0: ahci
Apr 20 20:44:40.860576 kernel: scsi host1: ahci
Apr 20 20:44:40.860888 kernel: scsi host2: ahci
Apr 20 20:44:40.861085 kernel: scsi host3: ahci
Apr 20 20:44:40.862060 kernel: scsi host4: ahci
Apr 20 20:44:40.864859 kernel: scsi host5: ahci
Apr 20 20:44:40.864894 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Apr 20 20:44:40.864903 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Apr 20 20:44:40.864911 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Apr 20 20:44:40.864958 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Apr 20 20:44:40.864966 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Apr 20 20:44:40.864974 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Apr 20 20:44:40.864982 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 20 20:44:40.864990 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 20 20:44:40.864998 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 20 20:44:40.865006 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 20 20:44:40.865016 kernel: ata3.00: LPM support broken, forcing max_power
Apr 20 20:44:40.865025 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 20 20:44:40.865033 kernel: ata3.00: applying bridge limits
Apr 20 20:44:40.865042 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 20 20:44:40.865050 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 20 20:44:40.865058 kernel: ata3.00: LPM support broken, forcing max_power
Apr 20 20:44:40.865066 kernel: ata3.00: configured for UDMA/100
Apr 20 20:44:40.865313 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 20 20:44:40.865441 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 20 20:44:40.865544 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Apr 20 20:44:40.865554 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 20 20:44:40.865562 kernel: GPT:16515071 != 27000831
Apr 20 20:44:40.865570 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 20 20:44:40.866054 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 20 20:44:40.866068 kernel: GPT:16515071 != 27000831
Apr 20 20:44:40.866075 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 20 20:44:40.866081 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 20 20:44:40.866088 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 20 20:44:40.866195 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 20 20:44:40.866963 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 20 20:44:40.866990 kernel: device-mapper: uevent: version 1.0.3
Apr 20 20:44:40.867003 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 20 20:44:40.867014 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Apr 20 20:44:40.867027 kernel: raid6: avx512x4 gen() 24125 MB/s
Apr 20 20:44:40.867040 kernel: raid6: avx512x2 gen() 28081 MB/s
Apr 20 20:44:40.867051 kernel: raid6: avx512x1 gen() 26021 MB/s
Apr 20 20:44:40.867063 kernel: raid6: avx2x4 gen() 10211 MB/s
Apr 20 20:44:40.867074 kernel: raid6: avx2x2 gen() 19656 MB/s
Apr 20 20:44:40.867085 kernel: raid6: avx2x1 gen() 10964 MB/s
Apr 20 20:44:40.867095 kernel: raid6: using algorithm avx512x2 gen() 28081 MB/s
Apr 20 20:44:40.867106 kernel: raid6: .... xor() 14767 MB/s, rmw enabled
Apr 20 20:44:40.867118 kernel: raid6: using avx512x2 recovery algorithm
Apr 20 20:44:40.867130 kernel: xor: automatically using best checksumming function avx
Apr 20 20:44:40.867142 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 20 20:44:40.867155 kernel: BTRFS: device fsid 2b1891e6-d4d2-4c02-a1ed-3a6feccae86f devid 1 transid 45 /dev/mapper/usr (253:0) scanned by mount (179)
Apr 20 20:44:40.867166 kernel: BTRFS info (device dm-0): first mount of filesystem 2b1891e6-d4d2-4c02-a1ed-3a6feccae86f
Apr 20 20:44:40.867176 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 20 20:44:40.867189 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 20 20:44:40.867483 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 20 20:44:40.867504 kernel: loop: module loaded
Apr 20 20:44:40.867514 kernel: loop0: detected capacity change from 0 to 106960
Apr 20 20:44:40.867526 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 20 20:44:40.867537 kernel: hrtimer: interrupt took 7114370 ns
Apr 20 20:44:40.867550 systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:2: Support for option DefaultCPUAccounting= has been removed and it is ignored
Apr 20 20:44:40.867564 systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:5: Support for option DefaultBlockIOAccounting= has been removed and it is ignored
Apr 20 20:44:40.867577 systemd[1]: Successfully made /usr/ read-only.
Apr 20 20:44:40.867590 systemd[1]: systemd 258.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 20 20:44:40.867601 systemd[1]: Detected virtualization kvm.
Apr 20 20:44:40.867612 systemd[1]: Detected architecture x86-64.
Apr 20 20:44:40.868193 systemd[1]: Running in initrd.
Apr 20 20:44:40.868228 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Apr 20 20:44:40.868253 systemd[1]: No hostname configured, using default hostname.
Apr 20 20:44:40.868260 systemd[1]: Hostname set to .
Apr 20 20:44:40.868267 systemd[1]: Queued start job for default target initrd.target.
Apr 20 20:44:40.868274 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Apr 20 20:44:40.868281 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 20 20:44:40.868288 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 20 20:44:40.868298 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 20 20:44:40.868305 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 20 20:44:40.868312 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 20 20:44:40.868319 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 20 20:44:40.868325 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 20 20:44:40.868332 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 20 20:44:40.868340 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 20 20:44:40.868347 systemd[1]: Reached target paths.target - Path Units.
Apr 20 20:44:40.868354 systemd[1]: Reached target slices.target - Slice Units.
Apr 20 20:44:40.868361 systemd[1]: Reached target swap.target - Swaps.
Apr 20 20:44:40.868367 systemd[1]: Reached target timers.target - Timer Units.
Apr 20 20:44:40.868374 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 20 20:44:40.868381 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 20 20:44:40.868389 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Apr 20 20:44:40.868396 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 20 20:44:40.868403 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 20 20:44:40.868410 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 20 20:44:40.868420 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 20 20:44:40.868431 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 20 20:44:40.868440 systemd[1]: Reached target sockets.target - Socket Units.
Apr 20 20:44:40.868452 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 20 20:44:40.868461 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 20 20:44:40.868471 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 20 20:44:40.868482 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 20 20:44:40.868493 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 20 20:44:40.868504 systemd[1]: Starting systemd-fsck-usr.service...
Apr 20 20:44:40.868517 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 20 20:44:40.868529 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 20 20:44:40.868540 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 20:44:40.868551 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 20 20:44:40.868563 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 20 20:44:40.868572 systemd[1]: Finished systemd-fsck-usr.service.
Apr 20 20:44:40.868583 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 20 20:44:40.869067 systemd-journald[317]: Collecting audit messages is enabled.
Apr 20 20:44:40.869095 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 20 20:44:40.869102 kernel: Bridge firewalling registered
Apr 20 20:44:40.869111 systemd-journald[317]: Journal started
Apr 20 20:44:40.869129 systemd-journald[317]: Runtime Journal (/run/log/journal/4d3f8bcfb51f485f86a595dcac9be9f8) is 6M, max 48.1M, 42.1M free.
Apr 20 20:44:40.868711 systemd-modules-load[321]: Inserted module 'br_netfilter'
Apr 20 20:44:41.038257 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 20 20:44:41.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:41.042357 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 20 20:44:41.049888 kernel: audit: type=1130 audit(1776717881.040:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:41.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:41.062065 kernel: audit: type=1130 audit(1776717881.053:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:41.062357 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 20:44:41.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:41.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:41.086564 kernel: audit: type=1130 audit(1776717881.065:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:41.079249 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 20 20:44:41.145461 kernel: audit: type=1130 audit(1776717881.084:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:41.157080 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 20 20:44:41.169194 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 20 20:44:41.180601 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 20 20:44:41.201797 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 20 20:44:41.237487 systemd-tmpfiles[340]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 20 20:44:41.244469 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 20 20:44:41.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:41.255241 kernel: audit: type=1130 audit(1776717881.245:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:41.256340 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 20 20:44:41.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:41.271185 kernel: audit: type=1130 audit(1776717881.262:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:41.273415 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 20 20:44:41.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:41.278000 audit: BPF prog-id=5 op=LOAD
Apr 20 20:44:41.282158 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 20 20:44:41.289439 kernel: audit: type=1130 audit(1776717881.275:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:41.289470 kernel: audit: type=1334 audit(1776717881.278:9): prog-id=5 op=LOAD
Apr 20 20:44:41.331396 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 20 20:44:41.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:41.340902 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 20 20:44:41.349921 kernel: audit: type=1130 audit(1776717881.338:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:41.456588 dracut-cmdline[360]: dracut-109
Apr 20 20:44:41.475877 dracut-cmdline[360]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a
Apr 20 20:44:41.553993 systemd-resolved[355]: Positive Trust Anchors:
Apr 20 20:44:41.554027 systemd-resolved[355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 20 20:44:41.554030 systemd-resolved[355]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Apr 20 20:44:41.554086 systemd-resolved[355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 20 20:44:41.733121 systemd-resolved[355]: Defaulting to hostname 'linux'.
Apr 20 20:44:41.781200 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 20 20:44:41.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:41.846005 kernel: audit: type=1130 audit(1776717881.835:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:41.851482 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 20 20:44:42.771018 kernel: Loading iSCSI transport class v2.0-870.
Apr 20 20:44:42.817832 kernel: iscsi: registered transport (tcp)
Apr 20 20:44:42.920802 kernel: iscsi: registered transport (qla4xxx)
Apr 20 20:44:42.921097 kernel: QLogic iSCSI HBA Driver
Apr 20 20:44:43.055934 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line...
Apr 20 20:44:43.167379 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line.
Apr 20 20:44:43.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:43.172245 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 20 20:44:43.735874 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 20 20:44:43.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:43.751019 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 20 20:44:43.762801 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 20 20:44:43.843282 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 20 20:44:43.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:43.849000 audit: BPF prog-id=6 op=LOAD
Apr 20 20:44:43.849000 audit: BPF prog-id=7 op=LOAD
Apr 20 20:44:43.851180 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 20 20:44:43.985508 systemd-udevd[588]: Using default interface naming scheme 'v258'.
Apr 20 20:44:44.042357 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 20 20:44:44.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:44.050287 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 20 20:44:44.108919 dracut-pre-trigger[660]: rd.md=0: removing MD RAID activation
Apr 20 20:44:44.115994 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 20 20:44:44.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:44.119000 audit: BPF prog-id=8 op=LOAD
Apr 20 20:44:44.147952 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 20 20:44:44.291311 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 20 20:44:44.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:44.296188 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 20 20:44:44.360003 systemd-networkd[716]: lo: Link UP
Apr 20 20:44:44.361123 systemd-networkd[716]: lo: Gained carrier
Apr 20 20:44:44.368117 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 20 20:44:44.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:44.371553 systemd[1]: Reached target network.target - Network.
Apr 20 20:44:45.046489 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 20 20:44:45.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:45.055457 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 20 20:44:45.321108 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 20 20:44:45.374514 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 20 20:44:45.401528 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 20 20:44:45.428606 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 20 20:44:45.442192 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 20 20:44:45.458390 kernel: cryptd: max_cpu_qlen set to 1000
Apr 20 20:44:45.474742 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Apr 20 20:44:45.748866 disk-uuid[776]: Primary Header is updated.
Apr 20 20:44:45.748866 disk-uuid[776]: Secondary Entries is updated.
Apr 20 20:44:45.748866 disk-uuid[776]: Secondary Header is updated.
Apr 20 20:44:45.759609 kernel: AES CTR mode by8 optimization enabled
Apr 20 20:44:45.849846 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 20 20:44:45.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:45.849937 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 20:44:45.854049 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 20:44:45.859694 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 20:44:45.953367 systemd-networkd[716]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Apr 20 20:44:45.957313 systemd-networkd[716]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 20 20:44:45.974171 systemd-networkd[716]: eth0: Link UP
Apr 20 20:44:45.985700 systemd-networkd[716]: eth0: Gained carrier
Apr 20 20:44:45.985717 systemd-networkd[716]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Apr 20 20:44:46.040817 systemd-networkd[716]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 20 20:44:46.276813 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 20 20:44:46.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:46.298743 kernel: kauditd_printk_skb: 12 callbacks suppressed
Apr 20 20:44:46.298700 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 20:44:46.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:46.314027 kernel: audit: type=1130 audit(1776717886.280:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:46.312219 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 20 20:44:46.324054 kernel: audit: type=1130 audit(1776717886.305:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:46.316508 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 20 20:44:46.329076 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 20 20:44:46.357435 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 20 20:44:46.568090 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 20 20:44:46.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:46.585151 kernel: audit: type=1130 audit(1776717886.576:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:46.856603 disk-uuid[783]: Warning: The kernel is still using the old partition table.
Apr 20 20:44:46.856603 disk-uuid[783]: The new table will be used at the next reboot or after you
Apr 20 20:44:46.856603 disk-uuid[783]: run partprobe(8) or kpartx(8)
Apr 20 20:44:46.856603 disk-uuid[783]: The operation has completed successfully.
Apr 20 20:44:46.951264 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 20 20:44:46.955891 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 20 20:44:46.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:46.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:46.980775 kernel: audit: type=1130 audit(1776717886.968:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:46.981654 kernel: audit: type=1131 audit(1776717886.969:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:46.989038 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 20 20:44:47.235104 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (898)
Apr 20 20:44:47.244594 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 20:44:47.245174 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 20 20:44:47.279926 kernel: BTRFS info (device vda6): turning on async discard
Apr 20 20:44:47.281961 kernel: BTRFS info (device vda6): enabling free space tree
Apr 20 20:44:47.333302 kernel: BTRFS info (device vda6): last unmount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 20:44:47.353833 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 20 20:44:47.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:47.367979 kernel: audit: type=1130 audit(1776717887.360:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:47.374065 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 20 20:44:47.518090 systemd-networkd[716]: eth0: Gained IPv6LL
Apr 20 20:44:49.444448 ignition[917]: Ignition 2.24.0
Apr 20 20:44:49.444574 ignition[917]: Stage: fetch-offline
Apr 20 20:44:49.446134 ignition[917]: no configs at "/usr/lib/ignition/base.d"
Apr 20 20:44:49.446173 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 20:44:49.447878 ignition[917]: parsed url from cmdline: ""
Apr 20 20:44:49.447882 ignition[917]: no config URL provided
Apr 20 20:44:49.448126 ignition[917]: reading system config file "/usr/lib/ignition/user.ign"
Apr 20 20:44:49.448137 ignition[917]: no config at "/usr/lib/ignition/user.ign"
Apr 20 20:44:49.448210 ignition[917]: op(1): [started] loading QEMU firmware config module
Apr 20 20:44:49.448215 ignition[917]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 20 20:44:49.532017 ignition[917]: op(1): [finished] loading QEMU firmware config module
Apr 20 20:44:49.658944 ignition[917]: parsing config with SHA512: 4fcf9651b05f6a4c532cd0fa9bd9f083ecfb7613d543cfb4fd586e61eefd537784a4a1d4381983c060d1f2fec52c23d67e9e1651578ad7d64be112610d3ef5d6
Apr 20 20:44:50.065678 unknown[917]: fetched base config from "system"
Apr 20 20:44:50.065694 unknown[917]: fetched user config from "qemu"
Apr 20 20:44:50.109378 ignition[917]: fetch-offline: fetch-offline passed
Apr 20 20:44:50.111183 ignition[917]: Ignition finished successfully
Apr 20 20:44:50.120305 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 20 20:44:50.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:50.133800 kernel: audit: type=1130 audit(1776717890.123:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:50.126825 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 20 20:44:50.137050 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 20 20:44:51.068824 ignition[927]: Ignition 2.24.0
Apr 20 20:44:51.068956 ignition[927]: Stage: kargs
Apr 20 20:44:51.075087 ignition[927]: no configs at "/usr/lib/ignition/base.d"
Apr 20 20:44:51.075099 ignition[927]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 20:44:51.150303 ignition[927]: kargs: kargs passed
Apr 20 20:44:51.150704 ignition[927]: Ignition finished successfully
Apr 20 20:44:51.167571 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 20 20:44:51.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:51.214233 kernel: audit: type=1130 audit(1776717891.173:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:51.215102 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 20 20:44:52.073046 ignition[935]: Ignition 2.24.0
Apr 20 20:44:52.073083 ignition[935]: Stage: disks
Apr 20 20:44:52.073297 ignition[935]: no configs at "/usr/lib/ignition/base.d"
Apr 20 20:44:52.073304 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 20:44:52.079347 ignition[935]: disks: disks passed
Apr 20 20:44:52.080905 ignition[935]: Ignition finished successfully
Apr 20 20:44:52.116243 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 20 20:44:52.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:52.131324 kernel: audit: type=1130 audit(1776717892.119:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:52.124388 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 20 20:44:52.132162 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 20 20:44:52.141760 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 20 20:44:52.146155 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 20 20:44:52.148470 systemd[1]: Reached target basic.target - Basic System.
Apr 20 20:44:52.181919 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 20 20:44:52.588115 systemd-fsck[946]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Apr 20 20:44:52.640005 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 20 20:44:52.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:52.658724 kernel: audit: type=1130 audit(1776717892.647:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:52.658719 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 20 20:44:53.267151 kernel: EXT4-fs (vda9): mounted filesystem 2bdffc2e-451a-418b-b04b-9e3cd9229e7e r/w with ordered data mode. Quota mode: none.
Apr 20 20:44:53.278543 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 20 20:44:53.285051 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 20 20:44:53.337329 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 20 20:44:53.359483 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 20 20:44:53.367055 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 20 20:44:53.373882 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 20 20:44:53.374995 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 20 20:44:53.421746 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (956)
Apr 20 20:44:53.427076 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 20:44:53.434284 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 20 20:44:53.459438 kernel: BTRFS info (device vda6): turning on async discard
Apr 20 20:44:53.459568 kernel: BTRFS info (device vda6): enabling free space tree
Apr 20 20:44:53.461454 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 20 20:44:53.462738 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 20 20:44:53.541064 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 20 20:44:55.176788 kernel: loop1: detected capacity change from 0 to 43472
Apr 20 20:44:55.182038 kernel: loop1: p1 p2 p3
Apr 20 20:44:55.442228 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 20:44:55.443512 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 20:44:55.443568 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 20:44:55.443577 kernel: device-mapper: ioctl: error adding target to table
Apr 20 20:44:55.445371 systemd-confext[1046]: device-mapper: reload ioctl on 036bab43330e5a16b58b2997d79b59667c299046b83a0d438261a470d6586a8f-verity (253:1) failed: Invalid argument
Apr 20 20:44:55.506072 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 20:44:56.161962 kernel: erofs: (device dm-1): mounted with root inode @ nid 40.
Apr 20 20:44:56.359684 kernel: loop2: detected capacity change from 0 to 43472
Apr 20 20:44:56.370191 kernel: loop2: p1 p2 p3
Apr 20 20:44:56.515185 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 20:44:56.516129 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 20:44:56.516181 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 20:44:56.516843 kernel: device-mapper: ioctl: error adding target to table
Apr 20 20:44:56.521480 (sd-merge)[1058]: device-mapper: reload ioctl on 036bab43330e5a16b58b2997d79b59667c299046b83a0d438261a470d6586a8f-verity (253:1) failed: Invalid argument
Apr 20 20:44:56.537991 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 20:44:56.939287 kernel: erofs: (device dm-1): mounted with root inode @ nid 40.
Apr 20 20:44:56.967360 (sd-merge)[1058]: Using extensions '00-flatcar-default.raw'.
Apr 20 20:44:57.040851 (sd-merge)[1058]: Merged extensions into '/sysroot/etc'.
Apr 20 20:44:57.077584 initrd-setup-root[1065]: /etc 00-flatcar-default Mon 2026-04-20 20:44:41 UTC
Apr 20 20:44:57.100073 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 20 20:44:57.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:57.111145 kernel: audit: type=1130 audit(1776717897.103:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:57.109726 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 20 20:44:57.116471 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 20 20:44:57.265129 kernel: BTRFS info (device vda6): last unmount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 20:44:57.283232 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 20 20:44:57.336131 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 20 20:44:57.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:57.349792 kernel: audit: type=1130 audit(1776717897.341:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:57.657667 ignition[1074]: INFO : Ignition 2.24.0
Apr 20 20:44:57.657667 ignition[1074]: INFO : Stage: mount
Apr 20 20:44:57.663016 ignition[1074]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 20 20:44:57.663016 ignition[1074]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 20:44:57.680223 ignition[1074]: INFO : mount: mount passed
Apr 20 20:44:57.682186 ignition[1074]: INFO : Ignition finished successfully
Apr 20 20:44:57.685186 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 20 20:44:57.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:57.694093 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 20 20:44:57.698521 kernel: audit: type=1130 audit(1776717897.689:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:44:57.849073 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 20 20:44:57.960146 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1087)
Apr 20 20:44:57.960717 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 20:44:57.967324 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 20 20:44:57.985050 kernel: BTRFS info (device vda6): turning on async discard
Apr 20 20:44:58.033566 kernel: BTRFS info (device vda6): enabling free space tree
Apr 20 20:44:58.045601 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 20 20:44:58.639910 ignition[1104]: INFO : Ignition 2.24.0
Apr 20 20:44:58.651115 ignition[1104]: INFO : Stage: files
Apr 20 20:44:58.670984 ignition[1104]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 20 20:44:58.684168 ignition[1104]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 20:44:58.738188 ignition[1104]: DEBUG : files: compiled without relabeling support, skipping
Apr 20 20:44:58.754397 ignition[1104]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 20 20:44:58.754397 ignition[1104]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 20 20:44:58.772172 ignition[1104]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 20 20:44:58.787192 ignition[1104]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 20 20:44:58.802711 unknown[1104]: wrote ssh authorized keys file for user: core
Apr 20 20:44:58.810159 ignition[1104]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 20 20:44:58.842151 ignition[1104]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 20 20:44:58.853933 ignition[1104]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 20 20:44:59.500952 ignition[1104]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 20 20:45:01.016211 ignition[1104]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 20 20:45:01.035990 ignition[1104]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 20 20:45:01.035990 ignition[1104]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 20 20:45:01.035990 ignition[1104]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 20 20:45:01.055929 ignition[1104]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 20 20:45:01.055929 ignition[1104]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 20 20:45:01.055929 ignition[1104]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 20 20:45:01.055929 ignition[1104]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 20 20:45:01.055929 ignition[1104]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 20 20:45:01.055929 ignition[1104]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 20 20:45:01.055929 ignition[1104]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 20 20:45:01.055929 ignition[1104]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 20 20:45:01.055929 ignition[1104]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 20 20:45:01.055929 ignition[1104]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 20 20:45:01.055929 ignition[1104]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 20 20:45:01.730919 ignition[1104]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 20 20:45:12.575478 ignition[1104]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 20 20:45:12.575478 ignition[1104]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 20 20:45:12.587891 ignition[1104]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 20 20:45:12.592027 ignition[1104]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 20 20:45:12.592027 ignition[1104]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 20 20:45:12.592027 ignition[1104]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 20 20:45:12.592027 ignition[1104]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 20 20:45:12.592027 ignition[1104]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 20 20:45:12.592027 ignition[1104]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 20 20:45:12.592027 ignition[1104]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 20 20:45:12.974604 ignition[1104]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 20 20:45:13.006980 ignition[1104]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 20 20:45:13.010154 ignition[1104]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 20 20:45:13.010154 ignition[1104]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 20 20:45:13.010154 ignition[1104]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 20 20:45:13.019789 ignition[1104]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 20 20:45:13.019789 ignition[1104]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 20 20:45:13.019789 ignition[1104]: INFO : files: files passed
Apr 20 20:45:13.019789 ignition[1104]: INFO : Ignition finished successfully
Apr 20 20:45:13.032208 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 20 20:45:13.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:13.043914 kernel: audit: type=1130 audit(1776717913.032:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:13.143415 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 20 20:45:13.153357 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 20 20:45:13.205594 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 20 20:45:13.205938 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 20 20:45:13.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:13.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:13.224869 kernel: audit: type=1130 audit(1776717913.206:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:13.224893 kernel: audit: type=1131 audit(1776717913.206:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:13.364703 initrd-setup-root-after-ignition[1136]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 20 20:45:13.404745 initrd-setup-root-after-ignition[1138]: grep:
Apr 20 20:45:13.406177 initrd-setup-root-after-ignition[1142]: grep:
Apr 20 20:45:13.409765 initrd-setup-root-after-ignition[1138]: /sysroot/etc/flatcar/enabled-sysext.conf
Apr 20 20:45:13.413297 initrd-setup-root-after-ignition[1142]: /sysroot/etc/flatcar/enabled-sysext.conf
Apr 20 20:45:13.416259 initrd-setup-root-after-ignition[1138]: : No such file or directory
Apr 20 20:45:13.419871 initrd-setup-root-after-ignition[1142]: : No such file or directory
Apr 20 20:45:13.422180 initrd-setup-root-after-ignition[1138]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 20 20:45:13.465011 kernel: loop3: detected capacity change from 0 to 43472
Apr 20 20:45:13.468788 kernel: loop3: p1 p2 p3
Apr 20 20:45:13.623980 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 20:45:13.624289 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 20:45:13.624326 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 20:45:13.631991 kernel: device-mapper: ioctl: error adding target to table
Apr 20 20:45:13.632409 systemd-confext[1144]: device-mapper: reload ioctl on loop3p1-verity (253:2) failed: Invalid argument
Apr 20 20:45:13.668674 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 20:45:14.367802 kernel: erofs: (device dm-2): mounted with root inode @ nid 40.
Apr 20 20:45:14.460755 kernel: loop4: detected capacity change from 0 to 43472
Apr 20 20:45:14.463679 kernel: loop4: p1 p2 p3
Apr 20 20:45:14.611947 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 20:45:14.617300 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 20:45:14.617285 (sd-merge)[1156]: device-mapper: reload ioctl on loop4p1-verity (253:2) failed: Invalid argument
Apr 20 20:45:14.628466 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 20:45:14.628512 kernel: device-mapper: ioctl: error adding target to table
Apr 20 20:45:14.750429 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 20:45:15.273106 kernel: erofs: (device dm-2): mounted with root inode @ nid 40.
Apr 20 20:45:15.274997 (sd-merge)[1156]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh.
Apr 20 20:45:15.335870 kernel: device-mapper: ioctl: remove_all left 2 open device(s)
Apr 20 20:45:15.436025 kernel: loop4: detected capacity change from 0 to 378016
Apr 20 20:45:15.443938 kernel: loop4: p1 p2 p3
Apr 20 20:45:15.757018 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 20:45:15.757238 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 20:45:15.757249 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 20:45:15.758795 kernel: device-mapper: ioctl: error adding target to table
Apr 20 20:45:15.760489 systemd-sysext[1164]: device-mapper: reload ioctl on 5f63b01eb609e19b7df6b1f3554b098a8644903507171258f91f339ee69140b0-verity (253:2) failed: Invalid argument
Apr 20 20:45:15.779841 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 20:45:16.267263 kernel: erofs: (device dm-2): mounted with root inode @ nid 39.
Apr 20 20:45:16.333381 kernel: loop5: detected capacity change from 0 to 178200
Apr 20 20:45:16.340708 kernel: loop5: p1 p2 p3
Apr 20 20:45:16.542837 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 20:45:16.543040 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 20:45:16.543057 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 20:45:16.544827 kernel: device-mapper: ioctl: error adding target to table
Apr 20 20:45:16.546079 systemd-sysext[1164]: device-mapper: reload ioctl on 47e3a0d62726bde98fcb471f946aa0f0e9f97280e4f7267ec40f142aba643eb6-verity (253:2) failed: Invalid argument
Apr 20 20:45:16.556816 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 20:45:17.092497 kernel: erofs: (device dm-2): mounted with root inode @ nid 39.
Apr 20 20:45:17.168163 kernel: loop6: detected capacity change from 0 to 217752
Apr 20 20:45:17.419868 kernel: loop7: detected capacity change from 0 to 378016
Apr 20 20:45:17.429810 kernel: loop7: p1 p2 p3
Apr 20 20:45:17.744849 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 20:45:17.745020 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 20:45:17.745043 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 20:45:17.746869 kernel: device-mapper: ioctl: error adding target to table
Apr 20 20:45:17.752313 (sd-merge)[1182]: device-mapper: reload ioctl on 5f63b01eb609e19b7df6b1f3554b098a8644903507171258f91f339ee69140b0-verity (253:2) failed: Invalid argument
Apr 20 20:45:17.862982 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 20:45:18.252555 kernel: erofs: (device dm-2): mounted with root inode @ nid 39.
Apr 20 20:45:18.278027 kernel: loop1: detected capacity change from 0 to 178200
Apr 20 20:45:18.281675 kernel: loop1: p1 p2 p3
Apr 20 20:45:18.286244 kernel: loop1: p1 p2 p3
Apr 20 20:45:18.358072 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 20:45:18.359734 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 20:45:18.360197 kernel: device-mapper: table: 253:3: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 20:45:18.364927 kernel: device-mapper: ioctl: error adding target to table
Apr 20 20:45:18.365089 (sd-merge)[1182]: device-mapper: reload ioctl on 47e3a0d62726bde98fcb471f946aa0f0e9f97280e4f7267ec40f142aba643eb6-verity (253:3) failed: Invalid argument
Apr 20 20:45:18.376066 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 20:45:18.671906 kernel: erofs: (device dm-3): mounted with root inode @ nid 39.
Apr 20 20:45:18.684107 kernel: loop3: detected capacity change from 0 to 217752
Apr 20 20:45:18.830280 (sd-merge)[1182]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes-v1.35.1-x86-64.raw'.
Apr 20 20:45:18.849463 (sd-merge)[1182]: Merged extensions into '/sysroot/usr'.
Apr 20 20:45:18.921597 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 20 20:45:18.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:18.940443 kernel: audit: type=1130 audit(1776717918.928:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:18.942422 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 20 20:45:18.958764 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 20 20:45:19.115992 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 20 20:45:19.118978 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 20 20:45:19.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:19.126026 systemd[1]: initrd-parse-etc.service: Triggering OnSuccess= dependencies.
Apr 20 20:45:19.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:19.137871 kernel: audit: type=1130 audit(1776717919.124:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:19.136875 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 20 20:45:19.143691 kernel: audit: type=1131 audit(1776717919.125:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:19.139053 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 20 20:45:19.181890 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 20 20:45:19.209080 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 20 20:45:19.527256 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 20 20:45:19.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:19.545359 kernel: audit: type=1130 audit(1776717919.531:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:19.549288 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 20 20:45:19.839265 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 20 20:45:19.844973 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 20 20:45:19.848682 systemd[1]: Stopped target timers.target - Timer Units.
Apr 20 20:45:19.858243 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 20 20:45:19.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:19.859102 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 20 20:45:19.869167 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 20 20:45:19.885801 kernel: audit: type=1131 audit(1776717919.867:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:19.885673 systemd[1]: Stopped target basic.target - Basic System.
Apr 20 20:45:19.894480 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 20 20:45:19.921232 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 20 20:45:19.936316 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 20 20:45:19.943235 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 20 20:45:19.949025 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 20 20:45:19.955422 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 20 20:45:19.974103 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 20 20:45:19.987092 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 20 20:45:20.022485 systemd[1]: Stopped target swap.target - Swaps.
Apr 20 20:45:20.036519 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 20 20:45:20.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:20.050192 kernel: audit: type=1131 audit(1776717920.039:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:20.038334 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 20 20:45:20.041439 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 20 20:45:20.053194 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 20 20:45:20.065052 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 20 20:45:20.071502 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 20 20:45:20.083031 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 20 20:45:20.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:20.155089 kernel: audit: type=1131 audit(1776717920.147:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:20.139347 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 20 20:45:20.155172 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 20 20:45:20.159343 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 20 20:45:20.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:20.174533 systemd[1]: ignition-fetch-offline.service: Consumed 2.005s CPU time.
Apr 20 20:45:20.177920 systemd[1]: Stopped target paths.target - Path Units.
Apr 20 20:45:20.188041 kernel: audit: type=1131 audit(1776717920.168:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:20.193965 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 20 20:45:20.200602 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 20 20:45:20.208908 systemd[1]: Stopped target slices.target - Slice Units.
Apr 20 20:45:20.214570 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 20 20:45:20.221266 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 20 20:45:20.223456 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 20 20:45:20.239584 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 20 20:45:20.244289 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 20 20:45:20.254107 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Apr 20 20:45:20.254312 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Apr 20 20:45:20.265214 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 20 20:45:20.268517 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 20 20:45:20.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:20.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:20.286306 kernel: audit: type=1131 audit(1776717920.270:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:20.272682 systemd[1]: initrd-setup-root-after-ignition.service: Consumed 1.722s CPU time.
Apr 20 20:45:20.351062 kernel: audit: type=1131 audit(1776717920.284:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:20.275095 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 20 20:45:20.275297 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 20 20:45:20.284942 systemd[1]: ignition-files.service: Consumed 14.741s CPU time.
Apr 20 20:45:20.363015 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 20 20:45:20.375252 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 20 20:45:20.385038 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 20 20:45:20.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:20.385570 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 20 20:45:20.388228 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 20 20:45:20.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:45:20.397733 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 20 20:45:20.404453 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 20 20:45:20.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:20.404541 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 20 20:45:20.420012 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 20 20:45:20.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:20.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:20.421184 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 20 20:45:20.551342 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 20 20:45:20.571407 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 20 20:45:20.571585 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 20 20:45:20.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:20.588587 ignition[1211]: INFO : Ignition 2.24.0 Apr 20 20:45:20.588587 ignition[1211]: INFO : Stage: umount Apr 20 20:45:20.618995 ignition[1211]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 20 20:45:20.618995 ignition[1211]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 20:45:20.626898 ignition[1211]: INFO : umount: umount passed Apr 20 20:45:20.626898 ignition[1211]: INFO : Ignition finished successfully Apr 20 20:45:20.631875 systemd[1]: ignition-mount.service: Deactivated successfully. 
Apr 20 20:45:20.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:20.632142 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 20 20:45:20.637536 systemd[1]: Stopped target network.target - Network. Apr 20 20:45:20.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:20.638932 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 20 20:45:20.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:20.639055 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 20 20:45:20.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:20.643336 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 20 20:45:20.644329 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 20 20:45:20.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:20.648459 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 20 20:45:20.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:20.648589 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Apr 20 20:45:20.650599 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 20 20:45:20.655106 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 20 20:45:20.659122 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 20 20:45:20.659231 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 20 20:45:20.660400 systemd[1]: initrd-setup-root.service: Consumed 1.577s CPU time. Apr 20 20:45:20.662059 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 20 20:45:20.671610 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 20 20:45:20.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:20.690140 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 20 20:45:20.690342 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 20 20:45:20.696170 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 20 20:45:20.697188 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 20 20:45:20.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:20.704000 audit: BPF prog-id=5 op=UNLOAD Apr 20 20:45:20.708251 systemd[1]: Stopped target network-pre.target - Preparation for Network. Apr 20 20:45:20.709052 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 20 20:45:20.712000 audit: BPF prog-id=8 op=UNLOAD Apr 20 20:45:20.709118 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 20 20:45:20.717594 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Apr 20 20:45:20.723398 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 20 20:45:20.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:20.723491 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 20 20:45:20.724697 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 20 20:45:20.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:20.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:20.724731 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 20 20:45:20.728100 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 20 20:45:20.728161 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 20 20:45:20.740918 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 20 20:45:20.897167 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 20 20:45:20.902843 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 20 20:45:20.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:20.909340 systemd[1]: systemd-udevd.service: Consumed 7.027s CPU time. Apr 20 20:45:20.921942 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Apr 20 20:45:20.930056 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 20 20:45:20.953166 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 20 20:45:20.954285 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 20 20:45:20.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:20.966120 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 20 20:45:20.967178 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 20 20:45:20.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:20.978079 systemd[1]: dracut-cmdline.service: Consumed 1.645s CPU time. Apr 20 20:45:20.980881 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 20 20:45:21.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:20.981054 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 20 20:45:21.051003 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 20 20:45:21.058275 systemd[1]: systemd-network-generator.service: Deactivated successfully. Apr 20 20:45:21.059201 systemd[1]: Stopped systemd-network-generator.service - Generate Network Units from Kernel Command Line. Apr 20 20:45:21.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 20:45:21.060439 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 20 20:45:21.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:21.060481 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 20 20:45:21.072014 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 20 20:45:21.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:21.072237 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 20 20:45:21.095274 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 20 20:45:21.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:21.095704 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 20 20:45:21.118020 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 20 20:45:21.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:21.118209 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 20:45:21.150326 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 20 20:45:21.169028 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Apr 20 20:45:21.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:21.272514 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 20 20:45:21.272790 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 20 20:45:21.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:21.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:21.287700 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 20 20:45:21.306317 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 20 20:45:21.488201 systemd[1]: Switching root. Apr 20 20:45:21.767872 systemd-journald[317]: Journal stopped Apr 20 20:45:33.878871 systemd-journald[317]: Received SIGTERM from PID 1 (systemd). 
Apr 20 20:45:33.879022 kernel: SELinux: policy capability network_peer_controls=1 Apr 20 20:45:33.879041 kernel: SELinux: policy capability open_perms=1 Apr 20 20:45:33.879074 kernel: SELinux: policy capability extended_socket_class=1 Apr 20 20:45:33.879083 kernel: SELinux: policy capability always_check_network=0 Apr 20 20:45:33.879116 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 20 20:45:33.879140 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 20 20:45:33.879151 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 20 20:45:33.879162 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 20 20:45:33.879171 kernel: SELinux: policy capability userspace_initial_context=0 Apr 20 20:45:33.879180 systemd[1]: Successfully loaded SELinux policy in 502.542ms. Apr 20 20:45:33.879211 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 70.777ms. Apr 20 20:45:33.879237 systemd[1]: systemd 258.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 20 20:45:33.879261 systemd[1]: Detected virtualization kvm. Apr 20 20:45:33.879270 systemd[1]: Detected architecture x86-64. Apr 20 20:45:33.879279 systemd[1]: Detected first boot. Apr 20 20:45:33.879288 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Apr 20 20:45:33.879363 kernel: kauditd_printk_skb: 33 callbacks suppressed Apr 20 20:45:33.879372 kernel: audit: type=1334 audit(1776717923.975:83): prog-id=9 op=LOAD Apr 20 20:45:33.879382 kernel: audit: type=1334 audit(1776717923.977:84): prog-id=9 op=UNLOAD Apr 20 20:45:33.879390 zram_generator::config[1259]: No configuration found. 
Apr 20 20:45:33.879401 kernel: Guest personality initialized and is inactive Apr 20 20:45:33.879442 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Apr 20 20:45:33.879451 kernel: Initialized host personality Apr 20 20:45:33.879458 kernel: NET: Registered PF_VSOCK protocol family Apr 20 20:45:33.879482 systemd-ssh-generator[1255]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 20 20:45:33.879508 (sd-exec-[1240]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 20 20:45:33.879529 systemd[1]: Applying preset policy. Apr 20 20:45:33.879555 systemd[1]: Created symlink '/etc/systemd/system/multi-user.target.wants/prepare-helm.service' → '/etc/systemd/system/prepare-helm.service'. Apr 20 20:45:33.879565 systemd[1]: Created symlink '/etc/systemd/system/timers.target.wants/google-oslogin-cache.timer' → '/usr/lib/systemd/system/google-oslogin-cache.timer'. Apr 20 20:45:33.879573 systemd[1]: Populated /etc with preset unit settings. Apr 20 20:45:33.879582 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 20 20:45:33.879590 kernel: audit: type=1334 audit(1776717931.502:85): prog-id=10 op=LOAD Apr 20 20:45:33.879598 kernel: audit: type=1334 audit(1776717931.502:86): prog-id=2 op=UNLOAD Apr 20 20:45:33.879606 kernel: audit: type=1334 audit(1776717931.502:87): prog-id=11 op=LOAD Apr 20 20:45:33.879663 kernel: audit: type=1334 audit(1776717931.502:88): prog-id=12 op=LOAD Apr 20 20:45:33.879673 kernel: audit: type=1334 audit(1776717931.502:89): prog-id=3 op=UNLOAD Apr 20 20:45:33.879693 kernel: audit: type=1334 audit(1776717931.502:90): prog-id=4 op=UNLOAD Apr 20 20:45:33.879703 kernel: audit: type=1131 audit(1776717931.511:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 20:45:33.879712 kernel: audit: type=1334 audit(1776717931.520:92): prog-id=10 op=UNLOAD Apr 20 20:45:33.880296 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 20 20:45:33.880350 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 20 20:45:33.880504 kernel: audit: type=1130 audit(1776717931.646:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:33.880810 kernel: audit: type=1131 audit(1776717931.646:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:33.880824 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 20 20:45:33.880834 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 20 20:45:33.880870 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 20 20:45:33.880896 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 20 20:45:33.880905 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 20 20:45:33.880913 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 20 20:45:33.880923 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 20 20:45:33.880932 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 20 20:45:33.880941 systemd[1]: Created slice user.slice - User and Session Slice. Apr 20 20:45:33.880966 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Apr 20 20:45:33.880988 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 20 20:45:33.880998 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 20 20:45:33.881006 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 20 20:45:33.881015 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 20 20:45:33.881024 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 20 20:45:33.881032 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 20 20:45:33.881057 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 20 20:45:33.881067 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 20 20:45:33.881075 systemd[1]: Reached target imports.target - Image Downloads. Apr 20 20:45:33.881084 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 20 20:45:33.881092 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 20 20:45:33.881101 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 20 20:45:33.881110 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 20 20:45:33.881135 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 20 20:45:33.881145 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 20 20:45:33.881153 systemd[1]: Reached target remote-integritysetup.target - Remote Integrity Protected Volumes. Apr 20 20:45:33.881162 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Apr 20 20:45:33.881171 systemd[1]: Reached target slices.target - Slice Units. Apr 20 20:45:33.881180 systemd[1]: Reached target swap.target - Swaps. 
Apr 20 20:45:33.881190 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 20 20:45:33.902280 systemd[1]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password. Apr 20 20:45:33.927294 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 20 20:45:33.927661 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 20 20:45:33.927682 systemd[1]: Listening on systemd-factory-reset.socket - Factory Reset Management. Apr 20 20:45:33.927695 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Apr 20 20:45:33.927707 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Apr 20 20:45:33.927720 systemd[1]: Listening on systemd-networkd-varlink.socket - Network Service Varlink Socket. Apr 20 20:45:33.927734 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 20 20:45:33.927802 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Apr 20 20:45:33.927817 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Apr 20 20:45:33.927830 systemd[1]: Listening on systemd-resolved-monitor.socket - Resolve Monitor Varlink Socket. Apr 20 20:45:33.927843 systemd[1]: Listening on systemd-resolved-varlink.socket - Resolve Service Varlink Socket. Apr 20 20:45:33.927855 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 20 20:45:33.927867 systemd[1]: Listening on systemd-udevd-varlink.socket - udev Varlink Socket. Apr 20 20:45:33.927903 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 20 20:45:33.927936 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 20 20:45:33.927945 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 20 20:45:33.927953 systemd[1]: Mounting media.mount - External Media Directory... 
Apr 20 20:45:33.927975 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 20 20:45:33.927985 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 20 20:45:33.927994 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 20 20:45:33.928017 systemd[1]: tmp.mount: x-systemd.graceful-option=usrquota specified, but option is not available, suppressing. Apr 20 20:45:33.928027 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 20 20:45:33.928039 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 20 20:45:33.928052 systemd[1]: Reached target machines.target - Virtual Machines and Containers. Apr 20 20:45:33.928066 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 20 20:45:33.928107 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 20 20:45:33.928146 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 20 20:45:33.928160 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 20 20:45:33.928174 systemd[1]: modprobe@dm_mod.service - Load Kernel Module dm_mod was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!dm_mod). Apr 20 20:45:33.928186 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 20 20:45:33.929393 systemd[1]: modprobe@efi_pstore.service - Load Kernel Module efi_pstore was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!efi_pstore). Apr 20 20:45:33.929521 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Apr 20 20:45:33.929537 systemd[1]: modprobe@loop.service - Load Kernel Module loop was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!loop). Apr 20 20:45:33.929573 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 20 20:45:33.929588 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 20 20:45:33.929600 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 20 20:45:33.929613 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 20 20:45:33.931243 systemd[1]: Stopped systemd-fsck-usr.service. Apr 20 20:45:33.931292 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 20 20:45:33.931303 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 20 20:45:33.931312 kernel: fuse: init (API version 7.41) Apr 20 20:45:33.931338 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 20 20:45:33.931347 kernel: ACPI: bus type drm_connector registered Apr 20 20:45:33.931358 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line... Apr 20 20:45:33.931368 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 20 20:45:33.931391 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 20 20:45:33.931401 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 20 20:45:33.931410 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Apr 20 20:45:33.931418 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 20 20:45:33.931447 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 20 20:45:33.931525 systemd-journald[1325]: Collecting audit messages is enabled. Apr 20 20:45:33.934581 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 20 20:45:33.934705 systemd-journald[1325]: Journal started Apr 20 20:45:33.934739 systemd-journald[1325]: Runtime Journal (/run/log/journal/4d3f8bcfb51f485f86a595dcac9be9f8) is 6M, max 48.1M, 42.1M free. Apr 20 20:45:32.577000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Apr 20 20:45:33.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:33.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 20:45:33.732000 audit: BPF prog-id=12 op=UNLOAD Apr 20 20:45:33.732000 audit: BPF prog-id=11 op=UNLOAD Apr 20 20:45:33.734000 audit: BPF prog-id=13 op=LOAD Apr 20 20:45:33.735000 audit: BPF prog-id=14 op=LOAD Apr 20 20:45:33.735000 audit: BPF prog-id=15 op=LOAD Apr 20 20:45:33.876000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Apr 20 20:45:33.876000 audit[1325]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff53172230 a2=4000 a3=0 items=0 ppid=1 pid=1325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:45:33.876000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Apr 20 20:45:31.478785 systemd[1]: Queued start job for default target multi-user.target. Apr 20 20:45:31.505317 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 20 20:45:33.936723 systemd[1]: Started systemd-journald.service - Journal Service. Apr 20 20:45:31.511111 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 20 20:45:31.518093 systemd[1]: systemd-journald.service: Consumed 1.832s CPU time. Apr 20 20:45:33.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:33.941206 systemd[1]: Mounted media.mount - External Media Directory. Apr 20 20:45:33.949221 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 20 20:45:33.951939 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 20 20:45:33.955793 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Apr 20 20:45:33.965866 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 20 20:45:33.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:33.974258 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 20 20:45:33.977125 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 20 20:45:33.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:33.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:33.983941 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 20 20:45:34.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:34.011075 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 20 20:45:34.013026 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 20 20:45:34.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:34.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 20:45:34.019529 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 20 20:45:34.019839 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 20 20:45:34.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:34.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:34.028271 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 20 20:45:34.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:34.040339 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line. Apr 20 20:45:34.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:34.059201 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 20 20:45:34.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:34.067527 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Apr 20 20:45:34.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:34.157959 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 20 20:45:34.163792 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Apr 20 20:45:34.172760 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 20 20:45:34.184056 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 20 20:45:34.187189 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 20 20:45:34.187276 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 20 20:45:34.216234 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 20 20:45:34.221015 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 20 20:45:34.224881 systemd[1]: Starting systemd-confext.service - Merge System Configuration Images into /etc/... Apr 20 20:45:34.247345 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 20 20:45:34.259297 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 20 20:45:34.266293 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 20 20:45:34.269417 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 20 20:45:34.280125 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Apr 20 20:45:34.289077 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 20 20:45:34.306735 systemd-journald[1325]: Time spent on flushing to /var/log/journal/4d3f8bcfb51f485f86a595dcac9be9f8 is 135.752ms for 1226 entries. Apr 20 20:45:34.306735 systemd-journald[1325]: System Journal (/var/log/journal/4d3f8bcfb51f485f86a595dcac9be9f8) is 8M, max 163.5M, 155.5M free. Apr 20 20:45:34.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:34.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:34.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:34.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdb-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:34.298972 systemd[1]: Starting systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials... Apr 20 20:45:34.474914 systemd-journald[1325]: Received client request to flush runtime journal. Apr 20 20:45:34.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:34.308872 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Apr 20 20:45:34.478020 kernel: loop4: detected capacity change from 0 to 43472 Apr 20 20:45:34.315222 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 20 20:45:34.478190 kernel: loop4: p1 p2 p3 Apr 20 20:45:34.321402 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 20 20:45:34.341748 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 20 20:45:34.344196 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 20 20:45:34.358196 systemd[1]: Finished systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials. Apr 20 20:45:34.378210 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 20 20:45:34.419187 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 20 20:45:34.461171 systemd-tmpfiles[1375]: ACLs are not supported, ignoring. Apr 20 20:45:34.461186 systemd-tmpfiles[1375]: ACLs are not supported, ignoring. Apr 20 20:45:34.472059 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 20 20:45:34.483115 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Apr 20 20:45:34.490147 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 20:45:34.495229 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 20:45:34.493171 systemd-confext[1379]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 20:45:34.496274 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 20:45:34.496302 kernel: device-mapper: ioctl: error adding target to table Apr 20 20:45:34.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:34.510804 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 20:45:34.517046 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 20 20:45:34.529463 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 20 20:45:34.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:34.888101 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 20 20:45:34.919349 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 20 20:45:34.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 20:45:34.938000 audit: BPF prog-id=16 op=LOAD Apr 20 20:45:34.938000 audit: BPF prog-id=17 op=LOAD Apr 20 20:45:34.938000 audit: BPF prog-id=18 op=LOAD Apr 20 20:45:34.946736 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Apr 20 20:45:34.954000 audit: BPF prog-id=19 op=LOAD Apr 20 20:45:34.959140 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 20 20:45:34.968000 audit: BPF prog-id=20 op=LOAD Apr 20 20:45:34.975942 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 20 20:45:34.983897 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 20 20:45:35.024904 systemd[1]: Starting modprobe@tun.service - Load Kernel Module tun... Apr 20 20:45:35.031000 audit: BPF prog-id=21 op=LOAD Apr 20 20:45:35.031000 audit: BPF prog-id=22 op=LOAD Apr 20 20:45:35.031000 audit: BPF prog-id=23 op=LOAD Apr 20 20:45:35.040058 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 20 20:45:35.068027 kernel: tun: Universal TUN/TAP device driver, 1.6 Apr 20 20:45:35.071533 systemd[1]: modprobe@tun.service: Deactivated successfully. Apr 20 20:45:35.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:35.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:35.071947 systemd[1]: Finished modprobe@tun.service - Load Kernel Module tun. Apr 20 20:45:35.082000 audit: BPF prog-id=24 op=LOAD Apr 20 20:45:35.072409 systemd-tmpfiles[1405]: ACLs are not supported, ignoring. 
Apr 20 20:45:35.082000 audit: BPF prog-id=25 op=LOAD Apr 20 20:45:35.082000 audit: BPF prog-id=26 op=LOAD Apr 20 20:45:35.072419 systemd-tmpfiles[1405]: ACLs are not supported, ignoring. Apr 20 20:45:35.090093 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Apr 20 20:45:35.113725 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 20 20:45:35.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:35.190826 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 20 20:45:35.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:35.273063 systemd-nsresourced[1410]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Apr 20 20:45:35.280394 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Apr 20 20:45:35.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:35.439961 systemd-oomd[1402]: No swap; memory pressure usage will be degraded Apr 20 20:45:35.446330 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Apr 20 20:45:35.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:35.461817 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Apr 20 20:45:35.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:35.478956 systemd[1]: Reached target time-set.target - System Time Set. Apr 20 20:45:35.517277 systemd-resolved[1403]: Positive Trust Anchors: Apr 20 20:45:35.518082 systemd-resolved[1403]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 20 20:45:35.518091 systemd-resolved[1403]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Apr 20 20:45:35.519025 systemd-resolved[1403]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 20 20:45:35.548152 systemd-resolved[1403]: Defaulting to hostname 'linux'. Apr 20 20:45:35.562896 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 20 20:45:35.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:35.566771 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 20 20:45:38.319208 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Apr 20 20:45:38.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:38.336877 kernel: kauditd_printk_skb: 51 callbacks suppressed Apr 20 20:45:38.336989 kernel: audit: type=1130 audit(1776717938.329:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:38.337000 audit: BPF prog-id=7 op=UNLOAD Apr 20 20:45:38.337000 audit: BPF prog-id=6 op=UNLOAD Apr 20 20:45:38.337000 audit: BPF prog-id=27 op=LOAD Apr 20 20:45:38.341154 kernel: audit: type=1334 audit(1776717938.337:145): prog-id=7 op=UNLOAD Apr 20 20:45:38.341088 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 20 20:45:38.341282 kernel: audit: type=1334 audit(1776717938.337:146): prog-id=6 op=UNLOAD Apr 20 20:45:38.341302 kernel: audit: type=1334 audit(1776717938.337:147): prog-id=27 op=LOAD Apr 20 20:45:38.337000 audit: BPF prog-id=28 op=LOAD Apr 20 20:45:38.344819 kernel: audit: type=1334 audit(1776717938.337:148): prog-id=28 op=LOAD Apr 20 20:45:38.759592 systemd-udevd[1432]: Using default interface naming scheme 'v258'. Apr 20 20:45:39.191240 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 20 20:45:39.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:39.207794 kernel: audit: type=1130 audit(1776717939.199:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 20:45:39.208000 audit: BPF prog-id=29 op=LOAD Apr 20 20:45:39.213263 kernel: audit: type=1334 audit(1776717939.208:150): prog-id=29 op=LOAD Apr 20 20:45:39.222148 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 20 20:45:39.409078 systemd-networkd[1434]: lo: Link UP Apr 20 20:45:39.409086 systemd-networkd[1434]: lo: Gained carrier Apr 20 20:45:39.409819 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 20 20:45:39.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:39.420218 kernel: audit: type=1130 audit(1776717939.411:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:39.412313 systemd[1]: Reached target network.target - Network. Apr 20 20:45:39.424384 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 20 20:45:39.444805 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 20 20:45:39.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:39.555073 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 20 20:45:39.565368 kernel: audit: type=1130 audit(1776717939.557:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 20:45:39.600581 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 20 20:45:39.800491 systemd-networkd[1434]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 20 20:45:39.800502 systemd-networkd[1434]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 20 20:45:39.803128 systemd-networkd[1434]: eth0: Link UP Apr 20 20:45:39.803406 systemd-networkd[1434]: eth0: Gained carrier Apr 20 20:45:39.803428 systemd-networkd[1434]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 20 20:45:39.821015 systemd-networkd[1434]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 20 20:45:39.823430 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. Apr 20 20:45:40.445820 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 20 20:45:40.445859 systemd-timesyncd[1404]: Initial clock synchronization to Mon 2026-04-20 20:45:40.445665 UTC. Apr 20 20:45:40.445913 systemd-resolved[1403]: Clock change detected. Flushing caches. Apr 20 20:45:40.483361 kernel: mousedev: PS/2 mouse device common for all mice Apr 20 20:45:40.506460 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 20 20:45:40.570652 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Apr 20 20:45:40.586432 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 20 20:45:40.601573 kernel: ACPI: button: Power Button [PWRF] Apr 20 20:45:40.643519 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 20 20:45:40.651020 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 20 20:45:40.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:40.789645 kernel: audit: type=1130 audit(1776717940.774:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:40.763104 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 20 20:45:41.173383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 20 20:45:41.417832 kernel: erofs: (device dm-4): mounted with root inode @ nid 40. 
Apr 20 20:45:41.546536 kernel: loop4: detected capacity change from 0 to 43472 Apr 20 20:45:41.551318 kernel: loop4: p1 p2 p3 Apr 20 20:45:41.620988 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 20:45:41.620735 (sd-merge)[1496]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 20:45:41.625941 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 20:45:41.626045 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 20:45:41.626063 kernel: device-mapper: ioctl: error adding target to table Apr 20 20:45:41.626079 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 20:45:41.706325 systemd-networkd[1434]: eth0: Gained IPv6LL Apr 20 20:45:41.767833 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 20 20:45:41.771320 kernel: erofs: (device dm-4): mounted with root inode @ nid 40. Apr 20 20:45:41.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:41.773935 (sd-merge)[1496]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. Apr 20 20:45:41.793865 systemd[1]: Finished systemd-confext.service - Merge System Configuration Images into /etc/. Apr 20 20:45:41.799126 kernel: device-mapper: ioctl: remove_all left 4 open device(s) Apr 20 20:45:41.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-confext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:41.800043 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 20 20:45:41.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:41.829601 systemd[1]: Reached target network-online.target - Network is Online. Apr 20 20:45:41.841846 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 20 20:45:41.933436 kernel: loop4: detected capacity change from 0 to 178200 Apr 20 20:45:41.939584 kernel: loop4: p1 p2 p3 Apr 20 20:45:42.160804 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 20:45:42.156712 systemd-sysext[1507]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 20:45:42.161861 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 20:45:42.161886 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 20:45:42.161923 kernel: device-mapper: ioctl: error adding target to table Apr 20 20:45:42.188850 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 20:45:42.351511 kernel: erofs: (device dm-4): mounted with root inode @ nid 39. 
Apr 20 20:45:42.497259 kernel: loop4: detected capacity change from 0 to 217752 Apr 20 20:45:42.599589 kernel: loop4: detected capacity change from 0 to 378016 Apr 20 20:45:42.617742 kernel: loop4: p1 p2 p3 Apr 20 20:45:42.703403 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 20:45:42.731096 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 20:45:42.729747 systemd-sysext[1507]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 20:45:42.733896 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 20:45:42.733914 kernel: device-mapper: ioctl: error adding target to table Apr 20 20:45:42.738554 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 20:45:43.100834 kernel: erofs: (device dm-4): mounted with root inode @ nid 39. Apr 20 20:45:43.301646 kernel: loop4: detected capacity change from 0 to 178200 Apr 20 20:45:43.354960 kernel: loop4: p1 p2 p3 Apr 20 20:45:43.400735 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 20:45:43.402450 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 20:45:43.402538 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 20:45:43.407564 kernel: device-mapper: ioctl: error adding target to table Apr 20 20:45:43.414791 (sd-merge)[1527]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 20:45:43.424469 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 20:45:43.573341 kernel: erofs: (device dm-4): mounted with root inode @ nid 39. 
Apr 20 20:45:43.584475 kernel: loop5: detected capacity change from 0 to 217752 Apr 20 20:45:43.701192 kernel: loop6: detected capacity change from 0 to 378016 Apr 20 20:45:43.706443 kernel: loop6: p1 p2 p3 Apr 20 20:45:43.818489 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 20:45:43.823376 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 20:45:43.821105 (sd-merge)[1527]: device-mapper: reload ioctl on loop6p1-verity (253:5) failed: Invalid argument Apr 20 20:45:43.823604 kernel: device-mapper: table: 253:5: verity: Unrecognized verity feature request (-EINVAL) Apr 20 20:45:43.823624 kernel: device-mapper: ioctl: error adding target to table Apr 20 20:45:43.827544 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 20:45:44.201911 kernel: erofs: (device dm-5): mounted with root inode @ nid 39. Apr 20 20:45:44.231413 (sd-merge)[1527]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. Apr 20 20:45:44.238325 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 20 20:45:44.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:44.242390 kernel: kauditd_printk_skb: 3 callbacks suppressed Apr 20 20:45:44.242713 kernel: audit: type=1130 audit(1776717944.239:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 20:45:44.280521 kernel: device-mapper: ioctl: remove_all left 4 open device(s) Apr 20 20:45:44.284717 kernel: device-mapper: ioctl: remove_all left 4 open device(s) Apr 20 20:45:44.284648 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 20 20:45:44.666879 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 20 20:45:44.669401 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 20 20:45:44.669838 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 20 20:45:44.692813 systemd-tmpfiles[1544]: ACLs are not supported, ignoring. Apr 20 20:45:44.697547 systemd-tmpfiles[1544]: ACLs are not supported, ignoring. Apr 20 20:45:44.858805 systemd-tmpfiles[1544]: Detected autofs mount point /boot during canonicalization of boot. Apr 20 20:45:44.858838 systemd-tmpfiles[1544]: Skipping /boot Apr 20 20:45:44.928941 systemd-tmpfiles[1544]: Detected autofs mount point /boot during canonicalization of boot. Apr 20 20:45:44.931800 systemd-tmpfiles[1544]: Skipping /boot Apr 20 20:45:44.993940 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 20 20:45:45.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:45.070524 kernel: audit: type=1130 audit(1776717945.060:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:45.087104 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Apr 20 20:45:45.090271 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 20 20:45:45.097646 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 20 20:45:45.114519 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 20 20:45:45.121346 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 20 20:45:45.148000 audit[1554]: AUDIT1127 pid=1554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 20 20:45:45.154169 kernel: audit: type=1127 audit(1776717945.148:159): pid=1554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 20 20:45:45.156648 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 20 20:45:45.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:45.168740 kernel: audit: type=1130 audit(1776717945.160:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:45.219523 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 20 20:45:45.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 20:45:45.229622 kernel: audit: type=1130 audit(1776717945.222:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:45:45.236000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 20 20:45:45.241766 augenrules[1576]: No rules Apr 20 20:45:45.236000 audit[1576]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc980a55f0 a2=420 a3=0 items=0 ppid=1550 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:45:45.236000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 20 20:45:45.249463 kernel: audit: type=1305 audit(1776717945.236:162): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 20 20:45:45.244681 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 20 20:45:45.249903 kernel: audit: type=1300 audit(1776717945.236:162): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc980a55f0 a2=420 a3=0 items=0 ppid=1550 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:45:45.249926 kernel: audit: type=1327 audit(1776717945.236:162): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 20 20:45:45.260203 systemd[1]: audit-rules.service: Deactivated successfully. Apr 20 20:45:45.260606 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Apr 20 20:45:45.268855 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 20 20:45:46.601838 ldconfig[1552]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 20 20:45:46.657752 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 20 20:45:46.677743 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 20 20:45:46.926963 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 20 20:45:46.941401 systemd[1]: Reached target sysinit.target - System Initialization. Apr 20 20:45:46.957420 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 20 20:45:46.963317 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 20 20:45:46.968043 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 20 20:45:46.976202 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 20 20:45:46.981963 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 20 20:45:46.986579 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Apr 20 20:45:46.990660 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Apr 20 20:45:46.992442 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 20 20:45:46.994469 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 20 20:45:46.994520 systemd[1]: Reached target paths.target - Path Units. Apr 20 20:45:46.995983 systemd[1]: Reached target timers.target - Timer Units. 
Apr 20 20:45:47.003514 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 20 20:45:47.017668 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 20 20:45:47.055441 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 20 20:45:47.105517 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 20 20:45:47.178737 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 20 20:45:47.196203 systemd[1]: Listening on systemd-logind-varlink.socket - User Login Management Varlink Socket. Apr 20 20:45:47.207549 systemd[1]: Listening on systemd-machined.socket - Virtual Machine and Container Registration Service Socket. Apr 20 20:45:47.216757 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 20 20:45:47.235413 systemd[1]: Reached target sockets.target - Socket Units. Apr 20 20:45:47.241934 systemd[1]: Reached target basic.target - Basic System. Apr 20 20:45:47.245365 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 20 20:45:47.245456 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 20 20:45:47.264890 systemd[1]: Starting containerd.service - containerd container runtime... Apr 20 20:45:47.268641 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 20 20:45:47.295875 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 20 20:45:47.371727 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 20 20:45:47.389065 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 20 20:45:47.424905 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Apr 20 20:45:47.428747 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 20 20:45:47.430077 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 20 20:45:47.441919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:45:47.443911 jq[1592]: false Apr 20 20:45:47.451620 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 20 20:45:47.469932 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Refreshing passwd entry cache Apr 20 20:45:47.469920 oslogin_cache_refresh[1594]: Refreshing passwd entry cache Apr 20 20:45:47.470685 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 20 20:45:47.479798 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 20 20:45:47.490168 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Failure getting users, quitting Apr 20 20:45:47.490168 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 20 20:45:47.490168 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Refreshing group entry cache Apr 20 20:45:47.490277 extend-filesystems[1593]: Found /dev/vda6 Apr 20 20:45:47.489674 oslogin_cache_refresh[1594]: Failure getting users, quitting Apr 20 20:45:47.500678 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Failure getting groups, quitting Apr 20 20:45:47.500678 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 20 20:45:47.500708 extend-filesystems[1593]: Found /dev/vda9 Apr 20 20:45:47.489692 oslogin_cache_refresh[1594]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Apr 20 20:45:47.489772 oslogin_cache_refresh[1594]: Refreshing group entry cache Apr 20 20:45:47.499390 oslogin_cache_refresh[1594]: Failure getting groups, quitting Apr 20 20:45:47.499409 oslogin_cache_refresh[1594]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 20 20:45:47.502444 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 20 20:45:47.552906 extend-filesystems[1593]: Checking size of /dev/vda9 Apr 20 20:45:47.565664 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 20 20:45:47.588294 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 20 20:45:47.590624 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 20 20:45:47.602771 systemd[1]: Starting update-engine.service - Update Engine... Apr 20 20:45:47.644780 extend-filesystems[1593]: Resized partition /dev/vda9 Apr 20 20:45:47.670547 extend-filesystems[1624]: resize2fs 1.47.3 (8-Jul-2025) Apr 20 20:45:47.701697 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Apr 20 20:45:47.650868 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 20 20:45:47.696862 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 20 20:45:47.792909 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Apr 20 20:45:47.845268 extend-filesystems[1624]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 20 20:45:47.845268 extend-filesystems[1624]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 20 20:45:47.845268 extend-filesystems[1624]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Apr 20 20:45:47.705474 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
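For scale, the resize figures in the EXT4-fs kernel lines above convert to bytes as follows (4 KiB blocks per the "(4k) blocks" note; the GiB conversion is ours):

```python
BLOCK = 4096  # "(4k) blocks" per the extend-filesystems output above
old_blocks, new_blocks = 456704, 1784827  # from/to values in the EXT4-fs lines
for label, blocks in (("before", old_blocks), ("after", new_blocks)):
    print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
# before: 1.74 GiB
# after: 6.81 GiB
```

So the online resize grows the root filesystem from roughly 1.74 GiB to 6.81 GiB to fill the enlarged /dev/vda9 partition.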
Apr 20 20:45:47.894044 jq[1627]: true Apr 20 20:45:47.900060 extend-filesystems[1593]: Resized filesystem in /dev/vda9 Apr 20 20:45:47.707484 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 20 20:45:47.707833 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Apr 20 20:45:47.709463 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Apr 20 20:45:47.717801 systemd[1]: motdgen.service: Deactivated successfully. Apr 20 20:45:47.718120 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 20 20:45:47.729922 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 20 20:45:47.786276 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 20 20:45:47.795791 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 20 20:45:47.801816 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 20 20:45:47.803548 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 20 20:45:47.926877 update_engine[1623]: I20260420 20:45:47.921674 1623 main.cc:92] Flatcar Update Engine starting Apr 20 20:45:47.974780 jq[1647]: true Apr 20 20:45:48.062788 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 20 20:45:48.068827 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 20 20:45:48.307886 systemd-logind[1620]: Watching system buttons on /dev/input/event2 (Power Button) Apr 20 20:45:48.307908 systemd-logind[1620]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 20 20:45:48.329746 tar[1640]: linux-amd64/LICENSE Apr 20 20:45:48.329746 tar[1640]: linux-amd64/helm Apr 20 20:45:48.329608 systemd-logind[1620]: New seat seat0. Apr 20 20:45:48.350988 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Apr 20 20:45:48.368334 systemd[1]: Started systemd-logind.service - User Login Management. Apr 20 20:45:48.405378 dbus-daemon[1590]: [system] SELinux support is enabled Apr 20 20:45:48.408101 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 20 20:45:48.420935 update_engine[1623]: I20260420 20:45:48.420615 1623 update_check_scheduler.cc:74] Next update check in 7m25s Apr 20 20:45:48.430183 dbus-daemon[1590]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 20 20:45:48.439363 systemd[1]: Started update-engine.service - Update Engine. Apr 20 20:45:48.489334 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 20 20:45:48.495448 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 20 20:45:48.539860 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 20 20:45:48.540214 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 20 20:45:48.577604 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 20 20:45:48.594089 bash[1696]: Updated "/home/core/.ssh/authorized_keys" Apr 20 20:45:48.641970 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 20 20:45:48.688411 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Apr 20 20:45:49.001538 locksmithd[1697]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 20 20:45:49.050988 containerd[1648]: time="2026-04-20T20:45:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 20 20:45:49.050988 containerd[1648]: time="2026-04-20T20:45:49.049274735Z" level=info msg="starting containerd" revision=dea7da592f5d1d2b7755e3a161be07f43fad8f75 version=v2.2.1 Apr 20 20:45:49.092486 containerd[1648]: time="2026-04-20T20:45:49.091957843Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="26.107µs" Apr 20 20:45:49.092486 containerd[1648]: time="2026-04-20T20:45:49.092067407Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 20 20:45:49.111876 containerd[1648]: time="2026-04-20T20:45:49.095993717Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 20 20:45:49.111876 containerd[1648]: time="2026-04-20T20:45:49.096121790Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 20 20:45:49.111876 containerd[1648]: time="2026-04-20T20:45:49.096540872Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 20 20:45:49.111876 containerd[1648]: time="2026-04-20T20:45:49.096587221Z" level=info msg="loading plugin" id=io.containerd.mount-handler.v1.erofs type=io.containerd.mount-handler.v1 Apr 20 20:45:49.111876 containerd[1648]: time="2026-04-20T20:45:49.096597864Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 20 20:45:49.111876 containerd[1648]: time="2026-04-20T20:45:49.096651451Z" level=info msg="skip loading plugin" error="no scratch file 
generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 20 20:45:49.111876 containerd[1648]: time="2026-04-20T20:45:49.096663424Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 20 20:45:49.111876 containerd[1648]: time="2026-04-20T20:45:49.097848390Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 20 20:45:49.111876 containerd[1648]: time="2026-04-20T20:45:49.097905456Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 20 20:45:49.111876 containerd[1648]: time="2026-04-20T20:45:49.097931817Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 20 20:45:49.111876 containerd[1648]: time="2026-04-20T20:45:49.097938761Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Apr 20 20:45:49.111876 containerd[1648]: time="2026-04-20T20:45:49.112425248Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 20 20:45:49.120657 containerd[1648]: time="2026-04-20T20:45:49.112900999Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 20 20:45:49.124443 containerd[1648]: time="2026-04-20T20:45:49.123828532Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 20 20:45:49.124443 containerd[1648]: time="2026-04-20T20:45:49.123925441Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip 
plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 20 20:45:49.124443 containerd[1648]: time="2026-04-20T20:45:49.123936427Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 20 20:45:49.126397 containerd[1648]: time="2026-04-20T20:45:49.125552569Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 20 20:45:49.134550 containerd[1648]: time="2026-04-20T20:45:49.134467059Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 20 20:45:49.134550 containerd[1648]: time="2026-04-20T20:45:49.134574911Z" level=info msg="metadata content store policy set" policy=shared Apr 20 20:45:49.158670 containerd[1648]: time="2026-04-20T20:45:49.158582514Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 20 20:45:49.160980 containerd[1648]: time="2026-04-20T20:45:49.160434604Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 20 20:45:49.160980 containerd[1648]: time="2026-04-20T20:45:49.160502701Z" level=info msg="built-in NRI default validator is disabled" Apr 20 20:45:49.160980 containerd[1648]: time="2026-04-20T20:45:49.160509184Z" level=info msg="runtime interface created" Apr 20 20:45:49.160980 containerd[1648]: time="2026-04-20T20:45:49.160513056Z" level=info msg="created NRI interface" Apr 20 20:45:49.160980 containerd[1648]: time="2026-04-20T20:45:49.160520697Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Apr 20 20:45:49.161119 sshd_keygen[1628]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 20 20:45:49.173500 containerd[1648]: time="2026-04-20T20:45:49.171562868Z" level=info msg="skip loading plugin" error="failed to check mkfs.erofs availability: failed to run mkfs.erofs --help: exec: \"mkfs.erofs\": executable file not found in $PATH: 
skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Apr 20 20:45:49.178659 containerd[1648]: time="2026-04-20T20:45:49.175471469Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 20 20:45:49.178659 containerd[1648]: time="2026-04-20T20:45:49.175583944Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 20 20:45:49.178659 containerd[1648]: time="2026-04-20T20:45:49.175603819Z" level=info msg="loading plugin" id=io.containerd.mount-manager.v1.bolt type=io.containerd.mount-manager.v1 Apr 20 20:45:49.178659 containerd[1648]: time="2026-04-20T20:45:49.175880613Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 20 20:45:49.178659 containerd[1648]: time="2026-04-20T20:45:49.175955046Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 20 20:45:49.178659 containerd[1648]: time="2026-04-20T20:45:49.175971908Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 20 20:45:49.178659 containerd[1648]: time="2026-04-20T20:45:49.175992716Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 20 20:45:49.178659 containerd[1648]: time="2026-04-20T20:45:49.176039917Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 20 20:45:49.178659 containerd[1648]: time="2026-04-20T20:45:49.176056030Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 20 20:45:49.178659 containerd[1648]: time="2026-04-20T20:45:49.176067941Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 20 20:45:49.178659 containerd[1648]: time="2026-04-20T20:45:49.176081724Z" 
level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 20 20:45:49.178659 containerd[1648]: time="2026-04-20T20:45:49.176097041Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 20 20:45:49.179217 containerd[1648]: time="2026-04-20T20:45:49.177090776Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 20 20:45:49.179217 containerd[1648]: time="2026-04-20T20:45:49.179113977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 20 20:45:49.179847 containerd[1648]: time="2026-04-20T20:45:49.179259833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 20 20:45:49.179847 containerd[1648]: time="2026-04-20T20:45:49.179332494Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 20 20:45:49.179847 containerd[1648]: time="2026-04-20T20:45:49.179344475Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 20 20:45:49.179847 containerd[1648]: time="2026-04-20T20:45:49.179358117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 20 20:45:49.179847 containerd[1648]: time="2026-04-20T20:45:49.179369430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 20 20:45:49.179847 containerd[1648]: time="2026-04-20T20:45:49.179380294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 20 20:45:49.179847 containerd[1648]: time="2026-04-20T20:45:49.179430606Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.mounts type=io.containerd.grpc.v1 Apr 20 20:45:49.179847 containerd[1648]: time="2026-04-20T20:45:49.179446607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 20 
20:45:49.179847 containerd[1648]: time="2026-04-20T20:45:49.179459120Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 20 20:45:49.179847 containerd[1648]: time="2026-04-20T20:45:49.179468602Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 20 20:45:49.183904 containerd[1648]: time="2026-04-20T20:45:49.183046530Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 20 20:45:49.183904 containerd[1648]: time="2026-04-20T20:45:49.183629402Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 20 20:45:49.183904 containerd[1648]: time="2026-04-20T20:45:49.183655749Z" level=info msg="Start snapshots syncer" Apr 20 20:45:49.184266 containerd[1648]: time="2026-04-20T20:45:49.184204970Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 20 20:45:49.187543 containerd[1648]: time="2026-04-20T20:45:49.186637924Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 20 20:45:49.199397 containerd[1648]: time="2026-04-20T20:45:49.190930554Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 20 20:45:49.199397 containerd[1648]: 
time="2026-04-20T20:45:49.191267001Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 20 20:45:49.199397 containerd[1648]: time="2026-04-20T20:45:49.191409754Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 20 20:45:49.199397 containerd[1648]: time="2026-04-20T20:45:49.191439505Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 20 20:45:49.199397 containerd[1648]: time="2026-04-20T20:45:49.191455267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 20 20:45:49.199397 containerd[1648]: time="2026-04-20T20:45:49.191467219Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 20 20:45:49.199397 containerd[1648]: time="2026-04-20T20:45:49.191484794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 20 20:45:49.199397 containerd[1648]: time="2026-04-20T20:45:49.191529333Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 20 20:45:49.199397 containerd[1648]: time="2026-04-20T20:45:49.191546794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 20 20:45:49.199397 containerd[1648]: time="2026-04-20T20:45:49.191560951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 20 20:45:49.199397 containerd[1648]: time="2026-04-20T20:45:49.191574430Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 20 20:45:49.199397 containerd[1648]: time="2026-04-20T20:45:49.191604669Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 20 20:45:49.199397 containerd[1648]: 
time="2026-04-20T20:45:49.191622121Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 20 20:45:49.199397 containerd[1648]: time="2026-04-20T20:45:49.191631710Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 20 20:45:49.258754 containerd[1648]: time="2026-04-20T20:45:49.191641159Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 20 20:45:49.258754 containerd[1648]: time="2026-04-20T20:45:49.191649462Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 20 20:45:49.258754 containerd[1648]: time="2026-04-20T20:45:49.191659038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 20 20:45:49.258754 containerd[1648]: time="2026-04-20T20:45:49.191678098Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 20 20:45:49.258754 containerd[1648]: time="2026-04-20T20:45:49.191697535Z" level=info msg="Connect containerd service" Apr 20 20:45:49.258754 containerd[1648]: time="2026-04-20T20:45:49.191732101Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 20 20:45:49.258754 containerd[1648]: time="2026-04-20T20:45:49.197193899Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 20 20:45:49.328895 tar[1640]: linux-amd64/README.md Apr 20 20:45:49.341949 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 20 20:45:49.519056 systemd[1]: Starting issuegen.service - Generate /run/issue... 
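The `starting cri plugin` entry further up logs the effective CRI configuration as one escaped JSON blob. A sketch of reading fields out of such a dump (the excerpt below is re-typed from that line; the full blob parses the same way):

```python
import json

# Excerpt re-typed from the cri plugin config dump logged above.
dump = ('{"enableSelinux":true,"maxContainerLogLineSize":16384,'
        '"cni":{"binDirs":["/opt/cni/bin"],"confDir":"/etc/cni/net.d","maxConfNum":1}}')
cfg = json.loads(dump)
print(cfg["cni"]["confDir"])  # /etc/cni/net.d
print(cfg["enableSelinux"])   # True
```

The later `failed to load cni during init` error is consistent with this config: `/etc/cni/net.d` simply contains no network config yet at this point in boot.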
Apr 20 20:45:49.561658 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 20 20:45:49.670752 systemd[1]: issuegen.service: Deactivated successfully. Apr 20 20:45:49.685283 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 20 20:45:49.716807 containerd[1648]: time="2026-04-20T20:45:49.716700059Z" level=info msg="Start subscribing containerd event" Apr 20 20:45:49.717644 containerd[1648]: time="2026-04-20T20:45:49.716868036Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 20 20:45:49.717644 containerd[1648]: time="2026-04-20T20:45:49.717582943Z" level=info msg="Start recovering state" Apr 20 20:45:49.718298 containerd[1648]: time="2026-04-20T20:45:49.718086203Z" level=info msg="Start event monitor" Apr 20 20:45:49.718298 containerd[1648]: time="2026-04-20T20:45:49.718101942Z" level=info msg="Start cni network conf syncer for default" Apr 20 20:45:49.718298 containerd[1648]: time="2026-04-20T20:45:49.718114185Z" level=info msg="Start streaming server" Apr 20 20:45:49.718298 containerd[1648]: time="2026-04-20T20:45:49.718123050Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 20 20:45:49.718298 containerd[1648]: time="2026-04-20T20:45:49.718168972Z" level=info msg="runtime interface starting up..." Apr 20 20:45:49.718298 containerd[1648]: time="2026-04-20T20:45:49.718179549Z" level=info msg="starting plugins..." Apr 20 20:45:49.718298 containerd[1648]: time="2026-04-20T20:45:49.718197394Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 20 20:45:49.718480 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 20 20:45:49.724309 containerd[1648]: time="2026-04-20T20:45:49.720661133Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 20 20:45:49.739967 systemd[1]: Started containerd.service - containerd container runtime. 
Apr 20 20:45:49.755762 containerd[1648]: time="2026-04-20T20:45:49.740049807Z" level=info msg="containerd successfully booted in 0.694327s" Apr 20 20:45:49.796352 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 20 20:45:49.936974 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 20 20:45:49.942273 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 20 20:45:49.948032 systemd[1]: Reached target getty.target - Login Prompts. Apr 20 20:45:51.103539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 20:45:51.116302 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 20 20:45:51.120294 systemd[1]: Startup finished in 7.195s (kernel) + 44.171s (initrd) + 28.445s (userspace) = 1min 19.813s. Apr 20 20:45:51.198862 (kubelet)[1749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 20:45:51.443311 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 20 20:45:51.445633 systemd[1]: Started sshd@0-10.0.0.6:22-10.0.0.1:53546.service - OpenSSH per-connection server daemon (10.0.0.1:53546). Apr 20 20:45:51.872626 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 53546 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 20:45:51.875679 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:45:52.312412 systemd-logind[1620]: New session '1' of user 'core' with class 'user' and type 'tty'. Apr 20 20:45:52.313639 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 20 20:45:52.314992 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 20 20:45:52.460448 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 20 20:45:52.465488 systemd[1]: Starting user@500.service - User Manager for UID 500...
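The `Startup finished` line above sums the three boot phases. Adding the printed figures gives 79.811 s against the printed total of 1min 19.813s; the small discrepancy is expected, since systemd rounds each term to milliseconds independently of the total it computes from the raw timestamps:

```python
# Phase durations in seconds, as printed in the "Startup finished" log line.
kernel, initrd, userspace = 7.195, 44.171, 28.445
total = kernel + initrd + userspace
print(f"{total:.3f}s")  # 79.811s; systemd prints 1min 19.813s from the unrounded values
# The gap stays within the +/- 1 ms per-term rounding budget.
assert abs(total - 79.813) < 0.003
```
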
Apr 20 20:45:52.603678 (systemd)[1767]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:45:52.640999 systemd-logind[1620]: New session '2' of user 'core' with class 'manager-early' and type 'unspecified'. Apr 20 20:45:52.672802 kubelet[1749]: E0420 20:45:52.670727 1749 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 20:45:52.702834 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 20:45:52.704294 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 20:45:52.705176 systemd[1]: kubelet.service: Consumed 1.980s CPU time, 256.1M memory peak. Apr 20 20:45:55.416787 systemd[1767]: Queued start job for default target default.target. Apr 20 20:45:55.430319 systemd[1767]: Created slice app.slice - User Application Slice. Apr 20 20:45:55.430437 systemd[1767]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Apr 20 20:45:55.430456 systemd[1767]: Reached target machines.target - Virtual Machines and Containers. Apr 20 20:45:55.430532 systemd[1767]: Reached target paths.target - Paths. Apr 20 20:45:55.430557 systemd[1767]: Reached target timers.target - Timers. Apr 20 20:45:55.474780 systemd[1767]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 20 20:45:55.548622 systemd[1767]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password. Apr 20 20:45:55.558962 systemd[1767]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Apr 20 20:45:55.679086 systemd[1767]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 20 20:45:55.681013 systemd[1767]: Reached target sockets.target - Sockets. 
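The kubelet failure above is the stock error for a node that has not been provisioned yet: `/var/lib/kubelet/config.yaml` is normally written by `kubeadm init` or `kubeadm join`, so the unit keeps exiting with status 1 until that happens. A small sketch (a hypothetical parsing helper, not part of kubelet) that pulls the offending path out of such an error line:

```python
import re

# Error text abbreviated from the kubelet run.go:72 line above.
line = ('run.go:72] "command failed" err="failed to load kubelet config file, '
        'path: /var/lib/kubelet/config.yaml, error: no such file or directory"')
m = re.search(r"path: (\S+?),", line)  # lazy match: stop at the first comma
print(m.group(1))  # /var/lib/kubelet/config.yaml
```
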
Apr 20 20:45:55.779003 systemd[1767]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Apr 20 20:45:55.785070 systemd[1767]: Reached target basic.target - Basic System. Apr 20 20:45:55.787897 systemd[1767]: Reached target default.target - Main User Target. Apr 20 20:45:55.787961 systemd[1767]: Startup finished in 3.081s. Apr 20 20:45:55.788013 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 20 20:45:55.877353 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 20 20:45:56.105897 systemd[1]: Started sshd@1-4097-10.0.0.6:22-10.0.0.1:56266.service - OpenSSH per-connection server daemon (10.0.0.1:56266). Apr 20 20:45:56.667339 sshd[1783]: Accepted publickey for core from 10.0.0.1 port 56266 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 20:45:56.674204 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:45:56.728539 systemd-logind[1620]: New session '3' of user 'core' with class 'user' and type 'tty'. Apr 20 20:45:56.746678 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 20 20:45:56.816976 sshd[1787]: Connection closed by 10.0.0.1 port 56266 Apr 20 20:45:56.818971 sshd-session[1783]: pam_unix(sshd:session): session closed for user core Apr 20 20:45:56.938943 systemd[1]: sshd@1-4097-10.0.0.6:22-10.0.0.1:56266.service: Deactivated successfully. Apr 20 20:45:56.956122 systemd[1]: session-3.scope: Deactivated successfully. Apr 20 20:45:56.972760 systemd-logind[1620]: Session 3 logged out. Waiting for processes to exit. Apr 20 20:45:56.983629 systemd[1]: Started sshd@2-2-10.0.0.6:22-10.0.0.1:56288.service - OpenSSH per-connection server daemon (10.0.0.1:56288). Apr 20 20:45:57.021593 systemd-logind[1620]: Removed session 3. 
Apr 20 20:45:57.588468 sshd[1793]: Accepted publickey for core from 10.0.0.1 port 56288 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 20:45:57.605936 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:45:57.833793 systemd-logind[1620]: New session '4' of user 'core' with class 'user' and type 'tty'. Apr 20 20:45:57.964251 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 20 20:45:58.278700 sshd[1797]: Connection closed by 10.0.0.1 port 56288 Apr 20 20:45:58.280482 sshd-session[1793]: pam_unix(sshd:session): session closed for user core Apr 20 20:45:58.380718 systemd[1]: sshd@2-2-10.0.0.6:22-10.0.0.1:56288.service: Deactivated successfully. Apr 20 20:45:58.394541 systemd[1]: session-4.scope: Deactivated successfully. Apr 20 20:45:58.445646 systemd-logind[1620]: Session 4 logged out. Waiting for processes to exit. Apr 20 20:45:58.457226 systemd-logind[1620]: Removed session 4. Apr 20 20:45:58.468599 systemd[1]: Started sshd@3-4098-10.0.0.6:22-10.0.0.1:56292.service - OpenSSH per-connection server daemon (10.0.0.1:56292). Apr 20 20:45:58.888744 sshd[1803]: Accepted publickey for core from 10.0.0.1 port 56292 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 20:45:58.902455 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:45:59.045915 systemd-logind[1620]: New session '5' of user 'core' with class 'user' and type 'tty'. Apr 20 20:45:59.078774 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 20 20:45:59.543714 sshd[1807]: Connection closed by 10.0.0.1 port 56292 Apr 20 20:45:59.544682 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Apr 20 20:45:59.595710 systemd[1]: sshd@3-4098-10.0.0.6:22-10.0.0.1:56292.service: Deactivated successfully. Apr 20 20:45:59.695264 systemd[1]: session-5.scope: Deactivated successfully. 
Apr 20 20:45:59.702479 systemd-logind[1620]: Session 5 logged out. Waiting for processes to exit. Apr 20 20:45:59.750553 systemd[1]: Started sshd@4-8193-10.0.0.6:22-10.0.0.1:56308.service - OpenSSH per-connection server daemon (10.0.0.1:56308). Apr 20 20:45:59.751602 systemd-logind[1620]: Removed session 5. Apr 20 20:46:00.116691 sshd[1813]: Accepted publickey for core from 10.0.0.1 port 56308 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 20:46:00.134752 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:46:00.329596 systemd-logind[1620]: New session '6' of user 'core' with class 'user' and type 'tty'. Apr 20 20:46:00.373963 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 20 20:46:01.017422 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 20 20:46:01.017959 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 20 20:46:02.730421 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 20 20:46:02.855438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:46:03.799862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 20:46:03.969315 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 20:46:04.001644 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 20 20:46:04.032904 (dockerd)[1854]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 20 20:46:04.234528 kubelet[1847]: E0420 20:46:04.233876 1847 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 20:46:04.244039 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 20:46:04.244232 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 20:46:04.244803 systemd[1]: kubelet.service: Consumed 894ms CPU time, 111.7M memory peak. Apr 20 20:46:05.664669 dockerd[1854]: time="2026-04-20T20:46:05.664108514Z" level=info msg="Starting up" Apr 20 20:46:05.756021 dockerd[1854]: time="2026-04-20T20:46:05.752310643Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 20 20:46:05.932050 dockerd[1854]: time="2026-04-20T20:46:05.926302704Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 20 20:46:06.266985 dockerd[1854]: time="2026-04-20T20:46:06.266651319Z" level=info msg="Loading containers: start." Apr 20 20:46:06.328342 kernel: Initializing XFRM netlink socket Apr 20 20:46:09.485529 systemd-networkd[1434]: docker0: Link UP Apr 20 20:46:09.501667 dockerd[1854]: time="2026-04-20T20:46:09.501250722Z" level=info msg="Loading containers: done." 
Apr 20 20:46:09.619660 dockerd[1854]: time="2026-04-20T20:46:09.619078673Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 20 20:46:09.619660 dockerd[1854]: time="2026-04-20T20:46:09.619636741Z" level=info msg="Docker daemon" commit=45873be4ae3f5488c9498b3d9f17deaddaf609f4 containerd-snapshotter=false storage-driver=overlay2 version=28.2.2 Apr 20 20:46:09.627676 dockerd[1854]: time="2026-04-20T20:46:09.619817708Z" level=info msg="Initializing buildkit" Apr 20 20:46:09.651747 dockerd[1854]: time="2026-04-20T20:46:09.650994505Z" level=warning msg="CDI setup error /etc/cdi: failed to monitor for changes: no such file or directory" Apr 20 20:46:09.651747 dockerd[1854]: time="2026-04-20T20:46:09.651048327Z" level=warning msg="CDI setup error /var/run/cdi: failed to monitor for changes: no such file or directory" Apr 20 20:46:10.171661 dockerd[1854]: time="2026-04-20T20:46:10.169912946Z" level=info msg="Completed buildkit initialization" Apr 20 20:46:10.194532 dockerd[1854]: time="2026-04-20T20:46:10.191837540Z" level=info msg="Daemon has completed initialization" Apr 20 20:46:10.194532 dockerd[1854]: time="2026-04-20T20:46:10.192921772Z" level=info msg="API listen on /run/docker.sock" Apr 20 20:46:10.235862 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 20 20:46:11.875199 containerd[1648]: time="2026-04-20T20:46:11.874744170Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 20 20:46:13.378212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3966846248.mount: Deactivated successfully. Apr 20 20:46:14.488682 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 20 20:46:14.504427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 20 20:46:15.579860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 20:46:15.625210 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 20:46:16.549901 kubelet[2131]: E0420 20:46:16.544913 2131 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 20:46:16.563437 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 20:46:16.563560 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 20:46:16.592890 systemd[1]: kubelet.service: Consumed 2.182s CPU time, 114.9M memory peak. Apr 20 20:46:18.859165 containerd[1648]: time="2026-04-20T20:46:18.803806267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:46:19.055342 containerd[1648]: time="2026-04-20T20:46:18.903604896Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=26910354" Apr 20 20:46:19.055342 containerd[1648]: time="2026-04-20T20:46:18.917463977Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:46:19.066461 containerd[1648]: time="2026-04-20T20:46:19.066052770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:46:19.067865 containerd[1648]: time="2026-04-20T20:46:19.067766453Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 7.192795583s" Apr 20 20:46:19.067865 containerd[1648]: time="2026-04-20T20:46:19.067850762Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\"" Apr 20 20:46:19.072976 containerd[1648]: time="2026-04-20T20:46:19.071676798Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 20 20:46:24.162896 containerd[1648]: time="2026-04-20T20:46:24.162457372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:46:24.164443 containerd[1648]: time="2026-04-20T20:46:24.163193782Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=1, bytes read=20119552" Apr 20 20:46:24.164577 containerd[1648]: time="2026-04-20T20:46:24.164543554Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:46:24.204991 containerd[1648]: time="2026-04-20T20:46:24.204100399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:46:24.228478 containerd[1648]: time="2026-04-20T20:46:24.222621777Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag 
\"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 5.150129206s" Apr 20 20:46:24.228478 containerd[1648]: time="2026-04-20T20:46:24.222732743Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\"" Apr 20 20:46:24.228478 containerd[1648]: time="2026-04-20T20:46:24.225708719Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 20 20:46:26.848015 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 20 20:46:26.862226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:46:28.323957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 20:46:28.425288 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 20:46:29.378929 containerd[1648]: time="2026-04-20T20:46:29.377840608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:46:29.396579 containerd[1648]: time="2026-04-20T20:46:29.386781249Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=1, bytes read=14241792" Apr 20 20:46:29.396579 containerd[1648]: time="2026-04-20T20:46:29.394462371Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:46:29.442985 containerd[1648]: time="2026-04-20T20:46:29.442657916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:46:29.449493 containerd[1648]: time="2026-04-20T20:46:29.443595845Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 5.217849817s" Apr 20 20:46:29.449493 containerd[1648]: time="2026-04-20T20:46:29.443623114Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\"" Apr 20 20:46:29.449493 containerd[1648]: time="2026-04-20T20:46:29.444338314Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 20 20:46:29.915225 kubelet[2163]: E0420 20:46:29.914796 2163 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 20:46:29.936753 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 20:46:29.940326 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 20:46:29.947843 systemd[1]: kubelet.service: Consumed 2.252s CPU time, 110.2M memory peak. Apr 20 20:46:33.333781 update_engine[1623]: I20260420 20:46:33.329684 1623 update_attempter.cc:509] Updating boot flags... Apr 20 20:46:34.779876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3561735729.mount: Deactivated successfully. 
Apr 20 20:46:37.150395 containerd[1648]: time="2026-04-20T20:46:37.149720127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:46:37.158429 containerd[1648]: time="2026-04-20T20:46:37.154231127Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=1, bytes read=22347161" Apr 20 20:46:37.158728 containerd[1648]: time="2026-04-20T20:46:37.158658782Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:46:37.165411 containerd[1648]: time="2026-04-20T20:46:37.164750204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:46:37.175221 containerd[1648]: time="2026-04-20T20:46:37.174966753Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 7.730594757s" Apr 20 20:46:37.175221 containerd[1648]: time="2026-04-20T20:46:37.174999147Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\"" Apr 20 20:46:37.176732 containerd[1648]: time="2026-04-20T20:46:37.175902691Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 20 20:46:39.089015 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1132370284 wd_nsec: 1132370263 Apr 20 20:46:39.939569 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1802159553.mount: Deactivated successfully. Apr 20 20:46:40.067018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 20 20:46:40.117046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:46:41.693165 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 20:46:41.736890 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 20:46:43.041205 kubelet[2223]: E0420 20:46:43.040589 2223 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 20:46:43.060297 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 20:46:43.060552 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 20:46:43.087750 systemd[1]: kubelet.service: Consumed 2.929s CPU time, 112.8M memory peak. 
Apr 20 20:46:48.318584 containerd[1648]: time="2026-04-20T20:46:48.317769832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:46:48.329412 containerd[1648]: time="2026-04-20T20:46:48.320787146Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23545053" Apr 20 20:46:48.329412 containerd[1648]: time="2026-04-20T20:46:48.326580222Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:46:48.336478 containerd[1648]: time="2026-04-20T20:46:48.335921401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:46:48.337914 containerd[1648]: time="2026-04-20T20:46:48.337790320Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 11.161858267s" Apr 20 20:46:48.337968 containerd[1648]: time="2026-04-20T20:46:48.337904480Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Apr 20 20:46:48.338603 containerd[1648]: time="2026-04-20T20:46:48.338582594Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 20 20:46:49.763016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount49078099.mount: Deactivated successfully. 
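An aside on the "Pulled image" entries above: each one logs both a size and a wall-clock duration (e.g. the coredns pull, size "23553139" bytes in 11.161858267s), so the effective pull throughput can be derived directly from the logged fields. A minimal sketch using the coredns values copied from this log:

```shell
# Effective throughput of the coredns pull, computed from the
# size and "in <duration>" fields of the "Pulled image" log entry.
size_bytes=23553139        # size "23553139"
duration_s=11.161858267    # "in 11.161858267s"
awk -v s="$size_bytes" -v d="$duration_s" \
    'BEGIN { printf "%.2f MB/s\n", s / d / 1e6 }'
```

This is only a rough figure: the logged size is the unpacked image size, not the exact bytes transferred, so treat the result as an order-of-magnitude estimate.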
Apr 20 20:46:49.798371 containerd[1648]: time="2026-04-20T20:46:49.797714405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 20:46:49.840386 containerd[1648]: time="2026-04-20T20:46:49.799583007Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Apr 20 20:46:49.840386 containerd[1648]: time="2026-04-20T20:46:49.806831060Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 20:46:49.862703 containerd[1648]: time="2026-04-20T20:46:49.861536097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 20:46:49.864924 containerd[1648]: time="2026-04-20T20:46:49.864358124Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.524552702s" Apr 20 20:46:49.864924 containerd[1648]: time="2026-04-20T20:46:49.864791105Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 20 20:46:49.870023 containerd[1648]: time="2026-04-20T20:46:49.869213013Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 20 20:46:52.073603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1297049582.mount: 
Deactivated successfully. Apr 20 20:46:53.267341 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 20 20:46:53.309844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:46:54.661630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 20:46:54.707732 (kubelet)[2299]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 20:46:55.375850 kubelet[2299]: E0420 20:46:55.374867 2299 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 20:46:55.389564 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 20:46:55.393620 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 20:46:55.405413 systemd[1]: kubelet.service: Consumed 1.352s CPU time, 114.2M memory peak. 
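The recurring "kubelet.service: Scheduled restart job, restart counter is at N" entries come from the unit's restart policy, not from anything the kubelet itself does. On kubeadm-style deployments the kubelet unit typically carries options like the following (a sketch of common defaults, not read from this host's unit file):

```ini
[Unit]
StartLimitIntervalSec=0

[Service]
Restart=always
RestartSec=10
```

This would be consistent with the roughly 10-second gap observed in this log between each kubelet failure and the next "Scheduled restart job" entry (e.g. failure at 20:46:43, restart scheduled at 20:46:53).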
Apr 20 20:46:58.764627 containerd[1648]: time="2026-04-20T20:46:58.763465675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:46:58.776967 containerd[1648]: time="2026-04-20T20:46:58.769656372Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23634747" Apr 20 20:46:58.778244 containerd[1648]: time="2026-04-20T20:46:58.777507562Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:46:58.922716 containerd[1648]: time="2026-04-20T20:46:58.921047806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:46:58.948890 containerd[1648]: time="2026-04-20T20:46:58.947680899Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 9.078391392s" Apr 20 20:46:58.948890 containerd[1648]: time="2026-04-20T20:46:58.947918915Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Apr 20 20:47:05.593957 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 20 20:47:05.642057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:47:07.002978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 20:47:07.059601 (kubelet)[2388]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 20:47:07.378012 kubelet[2388]: E0420 20:47:07.376808 2388 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 20:47:07.388869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 20:47:07.389059 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 20:47:07.395867 systemd[1]: kubelet.service: Consumed 1.167s CPU time, 110.2M memory peak. Apr 20 20:47:07.578355 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 20:47:07.625983 systemd[1]: kubelet.service: Consumed 1.167s CPU time, 110.2M memory peak. Apr 20 20:47:07.738814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:47:07.952589 systemd[1]: Reload requested from client PID 2405 ('systemctl') (unit session-6.scope)... Apr 20 20:47:07.952662 systemd[1]: Reloading... Apr 20 20:47:09.453839 zram_generator::config[2459]: No configuration found. Apr 20 20:47:09.484441 systemd-ssh-generator[2453]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 20 20:47:09.485490 (sd-exec-[2436]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 20 20:47:12.519912 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 20 20:47:15.802847 systemd[1]: Reloading finished in 7844 ms. Apr 20 20:47:16.381687 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 20:47:16.454932 (kubelet)[2498]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 20 20:47:16.527416 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:47:16.578828 systemd[1]: kubelet.service: Deactivated successfully. Apr 20 20:47:16.607930 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 20:47:16.609052 systemd[1]: kubelet.service: Consumed 659ms CPU time, 100.2M memory peak. Apr 20 20:47:16.886478 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:47:19.242914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 20:47:19.380035 (kubelet)[2514]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 20 20:47:20.684987 kubelet[2514]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 20 20:47:22.371607 kubelet[2514]: I0420 20:47:22.364049 2514 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 20 20:47:22.371607 kubelet[2514]: I0420 20:47:22.372066 2514 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 20 20:47:22.386023 kubelet[2514]: I0420 20:47:22.374116 2514 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 20 20:47:22.386023 kubelet[2514]: I0420 20:47:22.377081 2514 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 20 20:47:22.386023 kubelet[2514]: I0420 20:47:22.384546 2514 server.go:951] "Client rotation is on, will bootstrap in background" Apr 20 20:47:22.562720 kubelet[2514]: I0420 20:47:22.561758 2514 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 20 20:47:22.562720 kubelet[2514]: E0420 20:47:22.561864 2514 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 20:47:22.745707 kubelet[2514]: I0420 20:47:22.744461 2514 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 20 20:47:23.101695 kubelet[2514]: I0420 20:47:23.082974 2514 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 20 20:47:23.160516 kubelet[2514]: I0420 20:47:23.116977 2514 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 20 20:47:23.160516 kubelet[2514]: I0420 20:47:23.137659 2514 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 20 20:47:23.160516 kubelet[2514]: I0420 20:47:23.145311 2514 topology_manager.go:143] "Creating topology manager with none policy" Apr 20 20:47:23.160516 
kubelet[2514]: I0420 20:47:23.145549 2514 container_manager_linux.go:308] "Creating device plugin manager" Apr 20 20:47:23.186381 kubelet[2514]: I0420 20:47:23.150738 2514 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 20 20:47:23.186381 kubelet[2514]: I0420 20:47:23.185737 2514 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 20 20:47:23.189799 kubelet[2514]: I0420 20:47:23.188708 2514 kubelet.go:482] "Attempting to sync node with API server" Apr 20 20:47:23.193184 kubelet[2514]: I0420 20:47:23.190057 2514 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 20 20:47:23.193184 kubelet[2514]: I0420 20:47:23.190782 2514 kubelet.go:394] "Adding apiserver pod source" Apr 20 20:47:23.193184 kubelet[2514]: I0420 20:47:23.190928 2514 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 20 20:47:23.245979 kubelet[2514]: I0420 20:47:23.241811 2514 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.2.1" apiVersion="v1" Apr 20 20:47:23.279697 kubelet[2514]: I0420 20:47:23.278423 2514 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 20 20:47:23.279697 kubelet[2514]: I0420 20:47:23.279665 2514 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 20 20:47:23.299520 kubelet[2514]: W0420 20:47:23.281814 2514 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 20 20:47:23.381819 kubelet[2514]: I0420 20:47:23.372937 2514 server.go:1257] "Started kubelet" Apr 20 20:47:23.381819 kubelet[2514]: I0420 20:47:23.374111 2514 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 20 20:47:23.424725 kubelet[2514]: I0420 20:47:23.383071 2514 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 20 20:47:23.424725 kubelet[2514]: E0420 20:47:23.399876 2514 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a82ba5a126ea40 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 20:47:23.370940992 +0000 UTC m=+3.924992878,LastTimestamp:2026-04-20 20:47:23.370940992 +0000 UTC m=+3.924992878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:47:23.424725 kubelet[2514]: I0420 20:47:23.420368 2514 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 20 20:47:23.424725 kubelet[2514]: I0420 20:47:23.407407 2514 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 20 20:47:23.424725 kubelet[2514]: I0420 20:47:23.421534 2514 server.go:317] "Adding debug handlers to kubelet server" Apr 20 20:47:23.433442 kubelet[2514]: I0420 20:47:23.432074 2514 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 20 20:47:23.451691 kubelet[2514]: I0420 20:47:23.449692 2514 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 20 
20:47:23.459639 kubelet[2514]: I0420 20:47:23.451809 2514 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 20 20:47:23.459639 kubelet[2514]: I0420 20:47:23.453027 2514 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 20 20:47:23.459639 kubelet[2514]: E0420 20:47:23.455809 2514 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 20:47:23.459639 kubelet[2514]: E0420 20:47:23.456866 2514 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="200ms" Apr 20 20:47:23.459639 kubelet[2514]: I0420 20:47:23.457865 2514 reconciler.go:29] "Reconciler: start to sync state" Apr 20 20:47:23.530727 kubelet[2514]: E0420 20:47:23.530664 2514 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 20 20:47:23.530855 kubelet[2514]: I0420 20:47:23.530821 2514 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 20 20:47:23.556912 kubelet[2514]: I0420 20:47:23.554626 2514 factory.go:223] Registration of the containerd container factory successfully Apr 20 20:47:23.556912 kubelet[2514]: I0420 20:47:23.557805 2514 factory.go:223] Registration of the systemd container factory successfully Apr 20 20:47:23.589050 kubelet[2514]: E0420 20:47:23.560691 2514 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 20:47:23.665598 kubelet[2514]: E0420 20:47:23.662218 2514 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms" Apr 20 20:47:23.681567 kubelet[2514]: E0420 20:47:23.681229 2514 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 20:47:23.784901 kubelet[2514]: E0420 20:47:23.783660 2514 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 20:47:23.787739 kubelet[2514]: I0420 20:47:23.787460 2514 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 20 20:47:23.795503 kubelet[2514]: I0420 20:47:23.795309 2514 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 20 20:47:23.795503 kubelet[2514]: I0420 20:47:23.795433 2514 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 20 20:47:23.795503 kubelet[2514]: I0420 20:47:23.795605 2514 kubelet.go:2501] "Starting kubelet main sync loop" Apr 20 20:47:23.797646 kubelet[2514]: E0420 20:47:23.795760 2514 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 20 20:47:23.840751 kubelet[2514]: I0420 20:47:23.839925 2514 cpu_manager.go:225] "Starting" policy="none" Apr 20 20:47:23.840751 kubelet[2514]: I0420 20:47:23.840204 2514 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 20 20:47:23.840751 kubelet[2514]: I0420 20:47:23.840249 2514 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 20 20:47:23.868930 kubelet[2514]: I0420 20:47:23.867918 2514 policy_none.go:50] "Start" Apr 20 20:47:23.876282 kubelet[2514]: I0420 20:47:23.870542 2514 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 20 20:47:23.876282 kubelet[2514]: I0420 20:47:23.870772 2514 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 20 20:47:23.886624 kubelet[2514]: E0420 20:47:23.885859 2514 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 20:47:23.887953 kubelet[2514]: I0420 20:47:23.887707 2514 policy_none.go:44] "Start" Apr 20 20:47:23.899829 kubelet[2514]: E0420 20:47:23.898329 2514 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 20:47:23.987999 kubelet[2514]: E0420 20:47:23.987897 2514 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 20:47:24.086938 kubelet[2514]: E0420 20:47:24.086547 2514 controller.go:201] "Failed to ensure lease exists, 
will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms" Apr 20 20:47:24.102650 kubelet[2514]: E0420 20:47:24.090100 2514 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 20:47:24.102650 kubelet[2514]: E0420 20:47:24.102364 2514 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 20:47:24.200580 kubelet[2514]: E0420 20:47:24.199896 2514 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 20:47:24.295937 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 20 20:47:24.316013 kubelet[2514]: E0420 20:47:24.300695 2514 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 20:47:24.406905 kubelet[2514]: E0420 20:47:24.406545 2514 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 20:47:24.505396 kubelet[2514]: E0420 20:47:24.503371 2514 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 20:47:24.505396 kubelet[2514]: E0420 20:47:24.510484 2514 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 20:47:24.517067 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 20 20:47:24.634852 kubelet[2514]: E0420 20:47:24.629282 2514 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 20:47:24.737121 kubelet[2514]: E0420 20:47:24.735971 2514 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 20:47:24.743793 kubelet[2514]: E0420 20:47:24.743097 2514 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 20:47:24.798205 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 20 20:47:24.851984 kubelet[2514]: E0420 20:47:24.850601 2514 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 20:47:24.889054 kubelet[2514]: E0420 20:47:24.886478 2514 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 20 20:47:24.889054 kubelet[2514]: I0420 20:47:24.888858 2514 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 20 20:47:24.890963 kubelet[2514]: I0420 20:47:24.888970 2514 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 20 20:47:24.890963 kubelet[2514]: E0420 20:47:24.890832 2514 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="1.6s" Apr 20 20:47:24.891786 kubelet[2514]: I0420 20:47:24.891674 2514 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 20 
20:47:24.938940 kubelet[2514]: E0420 20:47:24.936866 2514 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 20 20:47:24.943868 kubelet[2514]: E0420 20:47:24.942909 2514 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 20:47:25.055285 kubelet[2514]: I0420 20:47:25.042937 2514 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 20:47:25.066885 kubelet[2514]: E0420 20:47:25.066442 2514 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 20 20:47:25.305949 kubelet[2514]: I0420 20:47:25.305389 2514 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 20:47:25.305949 kubelet[2514]: E0420 20:47:25.305775 2514 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 20 20:47:25.449731 kubelet[2514]: I0420 20:47:25.448735 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba15b63dde517d3f49c1db0a4abcdbe1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ba15b63dde517d3f49c1db0a4abcdbe1\") " pod="kube-system/kube-apiserver-localhost" Apr 20 20:47:25.449731 kubelet[2514]: I0420 20:47:25.449809 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba15b63dde517d3f49c1db0a4abcdbe1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ba15b63dde517d3f49c1db0a4abcdbe1\") " pod="kube-system/kube-apiserver-localhost" Apr 20 20:47:25.449731 
kubelet[2514]: I0420 20:47:25.449842 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba15b63dde517d3f49c1db0a4abcdbe1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ba15b63dde517d3f49c1db0a4abcdbe1\") " pod="kube-system/kube-apiserver-localhost" Apr 20 20:47:25.568871 kubelet[2514]: I0420 20:47:25.557946 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 20:47:25.568871 kubelet[2514]: I0420 20:47:25.564950 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 20:47:25.575849 kubelet[2514]: I0420 20:47:25.575453 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 20 20:47:25.579399 kubelet[2514]: I0420 20:47:25.579326 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 20:47:25.579649 
kubelet[2514]: I0420 20:47:25.579453 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 20:47:25.579649 kubelet[2514]: I0420 20:47:25.579557 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 20:47:25.643179 systemd[1]: Created slice kubepods-burstable-podba15b63dde517d3f49c1db0a4abcdbe1.slice - libcontainer container kubepods-burstable-podba15b63dde517d3f49c1db0a4abcdbe1.slice. 
Apr 20 20:47:25.718969 kubelet[2514]: E0420 20:47:25.718446 2514 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:47:25.723715 kubelet[2514]: I0420 20:47:25.721682 2514 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 20:47:25.725034 kubelet[2514]: E0420 20:47:25.723980 2514 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 20 20:47:25.753792 kubelet[2514]: E0420 20:47:25.752946 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:47:25.790175 containerd[1648]: time="2026-04-20T20:47:25.789593441Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"ba15b63dde517d3f49c1db0a4abcdbe1\" namespace:\"kube-system\"" Apr 20 20:47:25.852851 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice. 
Apr 20 20:47:25.890787 kubelet[2514]: E0420 20:47:25.889957 2514 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:47:25.963585 kubelet[2514]: E0420 20:47:25.961446 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:47:25.974376 containerd[1648]: time="2026-04-20T20:47:25.974224297Z" level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"14bc29ec35edba17af38052ec24275f2\" namespace:\"kube-system\"" Apr 20 20:47:26.062847 containerd[1648]: time="2026-04-20T20:47:26.060899228Z" level=info msg="connecting to shim 40510c1e3f5b4b7ae6a53e0fe605f9a7fb7f5c7b6e6d4d098b77631915d968c2" address="unix:///run/containerd/s/e2b8f62b3513e9df927389710f869d409d09faef7e3b5d805b3779a2442f5e56" namespace=k8s.io protocol=ttrpc version=3 Apr 20 20:47:26.234953 containerd[1648]: time="2026-04-20T20:47:26.233810607Z" level=info msg="connecting to shim f9b6b7d24ccd8773e81c37185f210b92cf271465cbfe4f09851166ce3e8a2a4b" address="unix:///run/containerd/s/8b77fcd47a339a13e379c28c84db3ce17f41850650ed4777ce96169d01489760" namespace=k8s.io protocol=ttrpc version=3 Apr 20 20:47:26.541023 kubelet[2514]: E0420 20:47:26.503784 2514 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="3.2s" Apr 20 20:47:26.544920 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice. 
Apr 20 20:47:26.588692 kubelet[2514]: I0420 20:47:26.588460 2514 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 20:47:26.592892 kubelet[2514]: E0420 20:47:26.591991 2514 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 20 20:47:26.668732 kubelet[2514]: E0420 20:47:26.667452 2514 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:47:26.688577 kubelet[2514]: E0420 20:47:26.688388 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:47:26.697868 containerd[1648]: time="2026-04-20T20:47:26.697784395Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"f7c88b30fc803a3ec6b6c138191bdaca\" namespace:\"kube-system\"" Apr 20 20:47:26.740996 systemd[1]: Started cri-containerd-40510c1e3f5b4b7ae6a53e0fe605f9a7fb7f5c7b6e6d4d098b77631915d968c2.scope - libcontainer container 40510c1e3f5b4b7ae6a53e0fe605f9a7fb7f5c7b6e6d4d098b77631915d968c2. Apr 20 20:47:27.095282 containerd[1648]: time="2026-04-20T20:47:27.094492961Z" level=info msg="connecting to shim c96e5b00fc0697c110fb24dcdaa0b47b62e5b149bf28b5d19396495840a65284" address="unix:///run/containerd/s/61d64848142b77a3bbfcc5d60ff12803e5d69747435a7b24f6de5ae72a49376f" namespace=k8s.io protocol=ttrpc version=3 Apr 20 20:47:27.104977 systemd[1]: Started cri-containerd-f9b6b7d24ccd8773e81c37185f210b92cf271465cbfe4f09851166ce3e8a2a4b.scope - libcontainer container f9b6b7d24ccd8773e81c37185f210b92cf271465cbfe4f09851166ce3e8a2a4b. 
Apr 20 20:47:27.941175 systemd[1]: Started cri-containerd-c96e5b00fc0697c110fb24dcdaa0b47b62e5b149bf28b5d19396495840a65284.scope - libcontainer container c96e5b00fc0697c110fb24dcdaa0b47b62e5b149bf28b5d19396495840a65284. Apr 20 20:47:28.120739 containerd[1648]: time="2026-04-20T20:47:28.119573490Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"ba15b63dde517d3f49c1db0a4abcdbe1\" namespace:\"kube-system\" returns sandbox id \"40510c1e3f5b4b7ae6a53e0fe605f9a7fb7f5c7b6e6d4d098b77631915d968c2\"" Apr 20 20:47:28.335019 kubelet[2514]: I0420 20:47:28.333955 2514 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 20:47:28.355758 kubelet[2514]: E0420 20:47:28.337689 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:47:28.355758 kubelet[2514]: E0420 20:47:28.348204 2514 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 20 20:47:28.355758 kubelet[2514]: E0420 20:47:28.353432 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:47:28.356604 containerd[1648]: time="2026-04-20T20:47:28.336055029Z" level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"14bc29ec35edba17af38052ec24275f2\" namespace:\"kube-system\" returns sandbox id \"f9b6b7d24ccd8773e81c37185f210b92cf271465cbfe4f09851166ce3e8a2a4b\"" Apr 20 20:47:28.412319 containerd[1648]: time="2026-04-20T20:47:28.411866140Z" level=info msg="CreateContainer within sandbox \"40510c1e3f5b4b7ae6a53e0fe605f9a7fb7f5c7b6e6d4d098b77631915d968c2\" for container name:\"kube-apiserver\"" Apr 20 20:47:28.456944 containerd[1648]: 
time="2026-04-20T20:47:28.456609525Z" level=info msg="CreateContainer within sandbox \"f9b6b7d24ccd8773e81c37185f210b92cf271465cbfe4f09851166ce3e8a2a4b\" for container name:\"kube-controller-manager\"" Apr 20 20:47:28.588952 containerd[1648]: time="2026-04-20T20:47:28.585463256Z" level=info msg="Container 8d8c7f56935bcdf552c7dcb0ee3df7b429b4ec252ba46d58cedc616076770270: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:47:28.621045 containerd[1648]: time="2026-04-20T20:47:28.605049954Z" level=info msg="Container a00dc0918cdfdd85beb9881c8b6769c6d61120fefe575b171326ddc8e2639cde: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:47:28.681578 containerd[1648]: time="2026-04-20T20:47:28.680314736Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"f7c88b30fc803a3ec6b6c138191bdaca\" namespace:\"kube-system\" returns sandbox id \"c96e5b00fc0697c110fb24dcdaa0b47b62e5b149bf28b5d19396495840a65284\"" Apr 20 20:47:28.682458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2432538292.mount: Deactivated successfully. 
Apr 20 20:47:28.713386 containerd[1648]: time="2026-04-20T20:47:28.713043203Z" level=info msg="CreateContainer within sandbox \"40510c1e3f5b4b7ae6a53e0fe605f9a7fb7f5c7b6e6d4d098b77631915d968c2\" for name:\"kube-apiserver\" returns container id \"a00dc0918cdfdd85beb9881c8b6769c6d61120fefe575b171326ddc8e2639cde\"" Apr 20 20:47:28.717312 containerd[1648]: time="2026-04-20T20:47:28.713695747Z" level=info msg="CreateContainer within sandbox \"f9b6b7d24ccd8773e81c37185f210b92cf271465cbfe4f09851166ce3e8a2a4b\" for name:\"kube-controller-manager\" returns container id \"8d8c7f56935bcdf552c7dcb0ee3df7b429b4ec252ba46d58cedc616076770270\"" Apr 20 20:47:28.719677 kubelet[2514]: E0420 20:47:28.713326 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:47:28.721825 containerd[1648]: time="2026-04-20T20:47:28.721640096Z" level=info msg="StartContainer for \"8d8c7f56935bcdf552c7dcb0ee3df7b429b4ec252ba46d58cedc616076770270\"" Apr 20 20:47:28.724055 containerd[1648]: time="2026-04-20T20:47:28.722870907Z" level=info msg="StartContainer for \"a00dc0918cdfdd85beb9881c8b6769c6d61120fefe575b171326ddc8e2639cde\"" Apr 20 20:47:28.725822 containerd[1648]: time="2026-04-20T20:47:28.725566297Z" level=info msg="connecting to shim 8d8c7f56935bcdf552c7dcb0ee3df7b429b4ec252ba46d58cedc616076770270" address="unix:///run/containerd/s/8b77fcd47a339a13e379c28c84db3ce17f41850650ed4777ce96169d01489760" protocol=ttrpc version=3 Apr 20 20:47:28.735766 containerd[1648]: time="2026-04-20T20:47:28.735397824Z" level=info msg="connecting to shim a00dc0918cdfdd85beb9881c8b6769c6d61120fefe575b171326ddc8e2639cde" address="unix:///run/containerd/s/e2b8f62b3513e9df927389710f869d409d09faef7e3b5d805b3779a2442f5e56" protocol=ttrpc version=3 Apr 20 20:47:28.752581 containerd[1648]: time="2026-04-20T20:47:28.752444712Z" level=info msg="CreateContainer within sandbox 
\"c96e5b00fc0697c110fb24dcdaa0b47b62e5b149bf28b5d19396495840a65284\" for container name:\"kube-scheduler\"" Apr 20 20:47:28.877914 kubelet[2514]: E0420 20:47:28.871942 2514 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 20:47:29.021832 containerd[1648]: time="2026-04-20T20:47:29.007724005Z" level=info msg="Container 6b69576dba9a0b0c0521b4322f5af7087be2f321b15b5a87dc1973ba93efa80d: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:47:29.118443 containerd[1648]: time="2026-04-20T20:47:29.118315711Z" level=info msg="CreateContainer within sandbox \"c96e5b00fc0697c110fb24dcdaa0b47b62e5b149bf28b5d19396495840a65284\" for name:\"kube-scheduler\" returns container id \"6b69576dba9a0b0c0521b4322f5af7087be2f321b15b5a87dc1973ba93efa80d\"" Apr 20 20:47:29.124489 containerd[1648]: time="2026-04-20T20:47:29.123737524Z" level=info msg="StartContainer for \"6b69576dba9a0b0c0521b4322f5af7087be2f321b15b5a87dc1973ba93efa80d\"" Apr 20 20:47:29.140728 containerd[1648]: time="2026-04-20T20:47:29.139179840Z" level=info msg="connecting to shim 6b69576dba9a0b0c0521b4322f5af7087be2f321b15b5a87dc1973ba93efa80d" address="unix:///run/containerd/s/61d64848142b77a3bbfcc5d60ff12803e5d69747435a7b24f6de5ae72a49376f" protocol=ttrpc version=3 Apr 20 20:47:29.289938 systemd[1]: Started cri-containerd-8d8c7f56935bcdf552c7dcb0ee3df7b429b4ec252ba46d58cedc616076770270.scope - libcontainer container 8d8c7f56935bcdf552c7dcb0ee3df7b429b4ec252ba46d58cedc616076770270. Apr 20 20:47:29.358744 systemd[1]: Started cri-containerd-a00dc0918cdfdd85beb9881c8b6769c6d61120fefe575b171326ddc8e2639cde.scope - libcontainer container a00dc0918cdfdd85beb9881c8b6769c6d61120fefe575b171326ddc8e2639cde. 
Apr 20 20:47:29.463347 systemd[1]: Started cri-containerd-6b69576dba9a0b0c0521b4322f5af7087be2f321b15b5a87dc1973ba93efa80d.scope - libcontainer container 6b69576dba9a0b0c0521b4322f5af7087be2f321b15b5a87dc1973ba93efa80d. Apr 20 20:47:29.781928 kubelet[2514]: E0420 20:47:29.780839 2514 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="6.4s" Apr 20 20:47:30.295364 containerd[1648]: time="2026-04-20T20:47:30.294234766Z" level=info msg="StartContainer for \"8d8c7f56935bcdf552c7dcb0ee3df7b429b4ec252ba46d58cedc616076770270\" returns successfully" Apr 20 20:47:30.555787 containerd[1648]: time="2026-04-20T20:47:30.551253967Z" level=info msg="StartContainer for \"a00dc0918cdfdd85beb9881c8b6769c6d61120fefe575b171326ddc8e2639cde\" returns successfully" Apr 20 20:47:31.226983 containerd[1648]: time="2026-04-20T20:47:31.226664904Z" level=info msg="StartContainer for \"6b69576dba9a0b0c0521b4322f5af7087be2f321b15b5a87dc1973ba93efa80d\" returns successfully" Apr 20 20:47:31.502061 kubelet[2514]: E0420 20:47:31.496564 2514 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:47:31.506191 kubelet[2514]: E0420 20:47:31.505370 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:47:31.788896 kubelet[2514]: I0420 20:47:31.769727 2514 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 20:47:31.942776 kubelet[2514]: E0420 20:47:31.942348 2514 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:47:32.009893 kubelet[2514]: 
E0420 20:47:32.009595 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:47:32.021534 kubelet[2514]: E0420 20:47:32.020653 2514 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 20 20:47:32.907607 kubelet[2514]: E0420 20:47:32.905972 2514 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:47:32.907607 kubelet[2514]: E0420 20:47:32.914297 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:47:32.907607 kubelet[2514]: E0420 20:47:32.918324 2514 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:47:32.907607 kubelet[2514]: E0420 20:47:32.918590 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:47:32.939924 kubelet[2514]: E0420 20:47:32.925337 2514 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:47:32.939924 kubelet[2514]: E0420 20:47:32.925737 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:47:34.274286 kubelet[2514]: E0420 20:47:34.273510 2514 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" 
not found" node="localhost" Apr 20 20:47:34.289591 kubelet[2514]: E0420 20:47:34.288592 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:47:34.342888 kubelet[2514]: E0420 20:47:34.341970 2514 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:47:34.342888 kubelet[2514]: E0420 20:47:34.336064 2514 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:47:34.480783 kubelet[2514]: E0420 20:47:34.350303 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:47:34.480783 kubelet[2514]: E0420 20:47:34.404384 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:47:34.956425 kubelet[2514]: E0420 20:47:34.949041 2514 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 20:47:35.283780 kubelet[2514]: E0420 20:47:35.282782 2514 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:47:35.283780 kubelet[2514]: E0420 20:47:35.289244 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:47:35.439561 kubelet[2514]: E0420 20:47:35.406378 2514 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"localhost\" not found" node="localhost" Apr 20 20:47:35.439561 kubelet[2514]: E0420 20:47:35.425747 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:47:37.194612 kubelet[2514]: E0420 20:47:37.193561 2514 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:47:37.203862 kubelet[2514]: E0420 20:47:37.197688 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:47:38.507849 kubelet[2514]: I0420 20:47:38.506669 2514 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 20:47:42.738833 kubelet[2514]: E0420 20:47:42.731048 2514 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a82ba5a126ea40 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 20:47:23.370940992 +0000 UTC m=+3.924992878,LastTimestamp:2026-04-20 20:47:23.370940992 +0000 UTC m=+3.924992878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:47:44.376327 kubelet[2514]: E0420 20:47:44.375732 2514 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:47:44.376327 kubelet[2514]: E0420 20:47:44.376977 2514 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:47:44.990525 kubelet[2514]: E0420 20:47:44.986693 2514 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 20:47:46.229936 kubelet[2514]: E0420 20:47:46.228050 2514 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 20 20:47:47.667074 kubelet[2514]: E0420 20:47:47.664615 2514 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 20:47:48.529819 kubelet[2514]: E0420 20:47:48.528593 2514 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 20 20:47:55.001984 kubelet[2514]: E0420 20:47:54.999705 2514 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 20:47:55.297926 kubelet[2514]: E0420 20:47:55.293487 2514 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:47:55.297926 kubelet[2514]: E0420 20:47:55.296032 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 
20:47:55.606667 kubelet[2514]: I0420 20:47:55.561222 2514 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 20:48:02.759353 kubelet[2514]: E0420 20:48:02.751963 2514 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a82ba5a126ea40 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 20:47:23.370940992 +0000 UTC m=+3.924992878,LastTimestamp:2026-04-20 20:47:23.370940992 +0000 UTC m=+3.924992878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:48:03.351448 kubelet[2514]: E0420 20:48:03.340512 2514 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 20 20:48:05.039201 kubelet[2514]: E0420 20:48:05.038332 2514 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 20:48:05.666707 kubelet[2514]: E0420 20:48:05.665110 2514 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 20 20:48:06.673868 kubelet[2514]: I0420 20:48:06.673373 2514 apiserver.go:52] "Watching apiserver" Apr 20 20:48:07.562980 kubelet[2514]: I0420 20:48:07.561939 2514 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 20 20:48:08.089091 
kubelet[2514]: E0420 20:48:08.075203 2514 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Apr 20 20:48:09.194812 kubelet[2514]: E0420 20:48:09.193697 2514 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Apr 20 20:48:10.494013 kubelet[2514]: E0420 20:48:10.489110 2514 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Apr 20 20:48:10.804009 kubelet[2514]: E0420 20:48:10.799345 2514 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 20 20:48:12.718218 kubelet[2514]: I0420 20:48:12.716625 2514 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 20:48:12.865830 kubelet[2514]: I0420 20:48:12.863778 2514 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 20 20:48:12.976006 kubelet[2514]: I0420 20:48:12.970595 2514 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 20 20:48:13.552621 kubelet[2514]: I0420 20:48:13.551076 2514 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 20 20:48:13.987633 kubelet[2514]: E0420 20:48:13.976337 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:48:14.091767 kubelet[2514]: I0420 20:48:14.091520 2514 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 20 20:48:14.136830 kubelet[2514]: E0420 20:48:14.136382 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:48:14.368545 kubelet[2514]: E0420 20:48:14.327988 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:48:15.376694 kubelet[2514]: I0420 20:48:15.374338 2514 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.341100734 podStartE2EDuration="2.341100734s" podCreationTimestamp="2026-04-20 20:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 20:48:15.322317147 +0000 UTC m=+55.876369031" watchObservedRunningTime="2026-04-20 20:48:15.341100734 +0000 UTC m=+55.895152610" Apr 20 20:48:24.581574 kubelet[2514]: I0420 20:48:24.580610 2514 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=10.579960304 podStartE2EDuration="10.579960304s" podCreationTimestamp="2026-04-20 20:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 20:48:24.489381219 +0000 UTC m=+65.043433110" watchObservedRunningTime="2026-04-20 20:48:24.579960304 +0000 UTC m=+65.134012211" Apr 20 20:48:26.620900 kubelet[2514]: I0420 20:48:26.617440 2514 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=13.617301477 podStartE2EDuration="13.617301477s" podCreationTimestamp="2026-04-20 20:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 20:48:26.61689389 +0000 UTC m=+67.170945777" 
watchObservedRunningTime="2026-04-20 20:48:26.617301477 +0000 UTC m=+67.171353369" Apr 20 20:48:28.858858 kubelet[2514]: E0420 20:48:28.857973 2514 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.049s" Apr 20 20:48:56.884735 kubelet[2514]: E0420 20:48:56.883745 2514 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.084s" Apr 20 20:49:09.662893 systemd[1]: cri-containerd-8d8c7f56935bcdf552c7dcb0ee3df7b429b4ec252ba46d58cedc616076770270.scope: Deactivated successfully. Apr 20 20:49:09.788519 systemd[1]: cri-containerd-8d8c7f56935bcdf552c7dcb0ee3df7b429b4ec252ba46d58cedc616076770270.scope: Consumed 8.728s CPU time, 22.2M memory peak. Apr 20 20:49:10.037767 containerd[1648]: time="2026-04-20T20:49:09.993117641Z" level=info msg="received container exit event container_id:\"8d8c7f56935bcdf552c7dcb0ee3df7b429b4ec252ba46d58cedc616076770270\" id:\"8d8c7f56935bcdf552c7dcb0ee3df7b429b4ec252ba46d58cedc616076770270\" pid:2735 exit_status:1 exited_at:{seconds:1776718149 nanos:742621710}" Apr 20 20:49:11.939961 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d8c7f56935bcdf552c7dcb0ee3df7b429b4ec252ba46d58cedc616076770270-rootfs.mount: Deactivated successfully. 
Apr 20 20:49:13.651721 kubelet[2514]: I0420 20:49:13.649970 2514 scope.go:122] "RemoveContainer" containerID="8d8c7f56935bcdf552c7dcb0ee3df7b429b4ec252ba46d58cedc616076770270" Apr 20 20:49:13.856170 kubelet[2514]: E0420 20:49:13.775796 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:49:14.503985 containerd[1648]: time="2026-04-20T20:49:14.501527164Z" level=info msg="CreateContainer within sandbox \"f9b6b7d24ccd8773e81c37185f210b92cf271465cbfe4f09851166ce3e8a2a4b\" for container name:\"kube-controller-manager\" attempt:1" Apr 20 20:49:15.362892 containerd[1648]: time="2026-04-20T20:49:15.361334123Z" level=info msg="Container 38fc99bfb037f7d749df8869849e08a52d710890f2b0e5fbfd27a719d68ef10b: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:49:15.805779 containerd[1648]: time="2026-04-20T20:49:15.744789461Z" level=info msg="CreateContainer within sandbox \"f9b6b7d24ccd8773e81c37185f210b92cf271465cbfe4f09851166ce3e8a2a4b\" for name:\"kube-controller-manager\" attempt:1 returns container id \"38fc99bfb037f7d749df8869849e08a52d710890f2b0e5fbfd27a719d68ef10b\"" Apr 20 20:49:15.942654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1841362814.mount: Deactivated successfully. 
Apr 20 20:49:16.159975 containerd[1648]: time="2026-04-20T20:49:16.147043064Z" level=info msg="StartContainer for \"38fc99bfb037f7d749df8869849e08a52d710890f2b0e5fbfd27a719d68ef10b\"" Apr 20 20:49:16.241685 containerd[1648]: time="2026-04-20T20:49:16.240704351Z" level=info msg="connecting to shim 38fc99bfb037f7d749df8869849e08a52d710890f2b0e5fbfd27a719d68ef10b" address="unix:///run/containerd/s/8b77fcd47a339a13e379c28c84db3ce17f41850650ed4777ce96169d01489760" protocol=ttrpc version=3 Apr 20 20:49:17.349043 kubelet[2514]: E0420 20:49:17.344371 2514 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.391s" Apr 20 20:49:17.349043 kubelet[2514]: E0420 20:49:17.345069 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:49:17.362613 systemd[1]: Started cri-containerd-38fc99bfb037f7d749df8869849e08a52d710890f2b0e5fbfd27a719d68ef10b.scope - libcontainer container 38fc99bfb037f7d749df8869849e08a52d710890f2b0e5fbfd27a719d68ef10b. 
Apr 20 20:49:19.098754 containerd[1648]: time="2026-04-20T20:49:19.092049721Z" level=info msg="StartContainer for \"38fc99bfb037f7d749df8869849e08a52d710890f2b0e5fbfd27a719d68ef10b\" returns successfully" Apr 20 20:49:19.627787 kubelet[2514]: E0420 20:49:19.624910 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:49:20.815716 kubelet[2514]: E0420 20:49:20.811211 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:49:23.587473 kubelet[2514]: E0420 20:49:23.586881 2514 kubelet_node_status.go:386] "Node not becoming ready in time after startup" Apr 20 20:49:24.370456 kubelet[2514]: E0420 20:49:24.368111 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:49:28.142009 kubelet[2514]: E0420 20:49:28.107816 2514 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:49:33.291876 kubelet[2514]: E0420 20:49:33.268977 2514 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:49:34.305071 kubelet[2514]: E0420 20:49:34.304587 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:49:38.483886 kubelet[2514]: E0420 20:49:38.454053 2514 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" Apr 20 20:49:38.922052 kubelet[2514]: E0420 20:49:38.899089 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:49:41.239292 kubelet[2514]: E0420 20:49:41.237733 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:49:43.651844 kubelet[2514]: E0420 20:49:43.650983 2514 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:49:46.541978 kubelet[2514]: E0420 20:49:46.538861 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:49:48.445226 kubelet[2514]: E0420 20:49:48.439028 2514 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:49:48.806879 kubelet[2514]: E0420 20:49:48.805998 2514 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:49:53.961898 kubelet[2514]: E0420 20:49:53.960273 2514 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:49:56.406964 systemd[1]: Reload requested from client PID 2863 ('systemctl') (unit session-6.scope)... Apr 20 20:49:56.407100 systemd[1]: Reloading... 
Apr 20 20:49:58.852740 kubelet[2514]: E0420 20:49:58.806018 2514 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.008s" Apr 20 20:49:59.469024 kubelet[2514]: E0420 20:49:59.373932 2514 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:50:02.108113 kubelet[2514]: E0420 20:50:02.107031 2514 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.239s" Apr 20 20:50:03.480822 zram_generator::config[2916]: No configuration found. Apr 20 20:50:03.546057 systemd-ssh-generator[2909]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 20 20:50:03.582807 (sd-exec-[2894]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 20 20:50:04.566498 kubelet[2514]: E0420 20:50:04.563949 2514 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:50:09.786715 kubelet[2514]: E0420 20:50:09.750360 2514 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:50:13.759440 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 20 20:50:14.827508 kubelet[2514]: E0420 20:50:14.827055 2514 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:50:18.816968 systemd[1]: Reloading finished in 22367 ms. 
Apr 20 20:50:19.739439 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:50:19.745513 systemd[1]: kubelet.service: Deactivated successfully. Apr 20 20:50:19.746718 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 20:50:19.746870 systemd[1]: kubelet.service: Consumed 1min 27.331s CPU time, 138M memory peak. Apr 20 20:50:19.895420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:50:21.777349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 20:50:21.915103 (kubelet)[2962]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 20 20:50:22.533942 kubelet[2962]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 20 20:50:22.839255 kubelet[2962]: I0420 20:50:22.835888 2962 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 20 20:50:22.839255 kubelet[2962]: I0420 20:50:22.838910 2962 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 20 20:50:22.839255 kubelet[2962]: I0420 20:50:22.839058 2962 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 20 20:50:22.839255 kubelet[2962]: I0420 20:50:22.839067 2962 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 20 20:50:22.847863 kubelet[2962]: I0420 20:50:22.845013 2962 server.go:951] "Client rotation is on, will bootstrap in background" Apr 20 20:50:22.932569 kubelet[2962]: I0420 20:50:22.931274 2962 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 20 20:50:22.954910 kubelet[2962]: I0420 20:50:22.951539 2962 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 20 20:50:23.172938 kubelet[2962]: I0420 20:50:23.170425 2962 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 20 20:50:23.259431 kubelet[2962]: I0420 20:50:23.259215 2962 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 20 20:50:23.339917 kubelet[2962]: I0420 20:50:23.261017 2962 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 20 20:50:23.339917 kubelet[2962]: I0420 20:50:23.284809 2962 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 20 20:50:23.339917 kubelet[2962]: I0420 20:50:23.304922 2962 topology_manager.go:143] "Creating topology manager with none policy" Apr 20 20:50:23.339917 kubelet[2962]: I0420 20:50:23.308459 2962 container_manager_linux.go:308] "Creating device plugin manager" Apr 20 20:50:23.373807 kubelet[2962]: I0420 20:50:23.336512 2962 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 20 20:50:23.373807 kubelet[2962]: I0420 20:50:23.352781 2962 state_mem.go:41] 
"Initialized" logger="CPUManager state memory" Apr 20 20:50:23.373807 kubelet[2962]: I0420 20:50:23.362646 2962 kubelet.go:482] "Attempting to sync node with API server" Apr 20 20:50:23.373807 kubelet[2962]: I0420 20:50:23.362846 2962 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 20 20:50:23.373807 kubelet[2962]: I0420 20:50:23.363004 2962 kubelet.go:394] "Adding apiserver pod source" Apr 20 20:50:23.373807 kubelet[2962]: I0420 20:50:23.363021 2962 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 20 20:50:23.699764 kubelet[2962]: I0420 20:50:23.696656 2962 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.2.1" apiVersion="v1" Apr 20 20:50:23.732578 kubelet[2962]: I0420 20:50:23.731669 2962 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 20 20:50:23.732578 kubelet[2962]: I0420 20:50:23.731953 2962 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 20 20:50:24.043016 kubelet[2962]: I0420 20:50:24.042281 2962 server.go:1257] "Started kubelet" Apr 20 20:50:24.049964 kubelet[2962]: I0420 20:50:24.047919 2962 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 20 20:50:24.049964 kubelet[2962]: I0420 20:50:24.048651 2962 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 20 20:50:24.054461 kubelet[2962]: I0420 20:50:24.048307 2962 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 20 20:50:24.169273 kubelet[2962]: I0420 20:50:24.168099 2962 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 20 20:50:24.174798 kubelet[2962]: I0420 20:50:24.174358 2962 server.go:317] "Adding debug handlers to 
kubelet server" Apr 20 20:50:24.175100 kubelet[2962]: I0420 20:50:24.174938 2962 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 20 20:50:24.175603 kubelet[2962]: I0420 20:50:24.175565 2962 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 20 20:50:24.183652 kubelet[2962]: I0420 20:50:24.180061 2962 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 20 20:50:24.193814 kubelet[2962]: I0420 20:50:24.188498 2962 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 20 20:50:24.194176 kubelet[2962]: I0420 20:50:24.194065 2962 reconciler.go:29] "Reconciler: start to sync state" Apr 20 20:50:24.217660 kubelet[2962]: I0420 20:50:24.215124 2962 factory.go:223] Registration of the systemd container factory successfully Apr 20 20:50:24.226542 kubelet[2962]: I0420 20:50:24.219239 2962 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 20 20:50:24.390387 kubelet[2962]: I0420 20:50:24.367648 2962 factory.go:223] Registration of the containerd container factory successfully Apr 20 20:50:24.429077 kubelet[2962]: I0420 20:50:24.428744 2962 apiserver.go:52] "Watching apiserver" Apr 20 20:50:24.913526 kubelet[2962]: E0420 20:50:24.895460 2962 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 20 20:50:25.115124 kubelet[2962]: I0420 20:50:24.969068 2962 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 20 20:50:25.277360 kubelet[2962]: I0420 20:50:25.265797 2962 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 20 20:50:25.277360 kubelet[2962]: I0420 20:50:25.268902 2962 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 20 20:50:25.277360 kubelet[2962]: I0420 20:50:25.271979 2962 kubelet.go:2501] "Starting kubelet main sync loop" Apr 20 20:50:25.277360 kubelet[2962]: E0420 20:50:25.275719 2962 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 20 20:50:25.397805 kubelet[2962]: E0420 20:50:25.394574 2962 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 20:50:25.662986 kubelet[2962]: E0420 20:50:25.644638 2962 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 20:50:26.055482 kubelet[2962]: E0420 20:50:26.054453 2962 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 20:50:26.860046 kubelet[2962]: E0420 20:50:26.859439 2962 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 20:50:27.048296 kubelet[2962]: I0420 20:50:27.039314 2962 cpu_manager.go:225] "Starting" policy="none" Apr 20 20:50:27.048296 kubelet[2962]: I0420 20:50:27.039406 2962 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 20 20:50:27.048296 kubelet[2962]: I0420 20:50:27.039489 2962 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 20 20:50:27.048296 kubelet[2962]: I0420 20:50:27.039888 2962 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 20 20:50:27.048296 kubelet[2962]: I0420 20:50:27.039900 2962 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 20 20:50:27.048296 kubelet[2962]: 
I0420 20:50:27.039915 2962 policy_none.go:50] "Start" Apr 20 20:50:27.048296 kubelet[2962]: I0420 20:50:27.039953 2962 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 20 20:50:27.048296 kubelet[2962]: I0420 20:50:27.039961 2962 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 20 20:50:27.048296 kubelet[2962]: I0420 20:50:27.040098 2962 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 20 20:50:27.048296 kubelet[2962]: I0420 20:50:27.040103 2962 policy_none.go:44] "Start" Apr 20 20:50:27.422771 kubelet[2962]: E0420 20:50:27.417975 2962 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 20 20:50:27.445566 kubelet[2962]: I0420 20:50:27.445495 2962 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 20 20:50:27.470617 kubelet[2962]: I0420 20:50:27.457627 2962 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 20 20:50:27.563755 kubelet[2962]: I0420 20:50:27.545055 2962 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 20 20:50:27.570870 kubelet[2962]: E0420 20:50:27.569240 2962 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 20 20:50:28.138374 kubelet[2962]: I0420 20:50:28.136555 2962 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 20:50:28.508013 kubelet[2962]: I0420 20:50:28.496093 2962 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 20 20:50:28.582952 kubelet[2962]: I0420 20:50:28.564027 2962 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 20 20:50:28.621881 kubelet[2962]: I0420 20:50:28.621784 2962 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 20 20:50:28.646162 kubelet[2962]: I0420 20:50:28.645828 2962 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 20 20:50:28.646162 kubelet[2962]: I0420 20:50:28.646283 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 20:50:28.646162 kubelet[2962]: I0420 20:50:28.646316 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 20:50:28.646162 kubelet[2962]: I0420 20:50:28.646366 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " 
pod="kube-system/kube-scheduler-localhost" Apr 20 20:50:28.646162 kubelet[2962]: I0420 20:50:28.646394 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba15b63dde517d3f49c1db0a4abcdbe1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ba15b63dde517d3f49c1db0a4abcdbe1\") " pod="kube-system/kube-apiserver-localhost" Apr 20 20:50:28.646162 kubelet[2962]: I0420 20:50:28.646417 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba15b63dde517d3f49c1db0a4abcdbe1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ba15b63dde517d3f49c1db0a4abcdbe1\") " pod="kube-system/kube-apiserver-localhost" Apr 20 20:50:28.660892 kubelet[2962]: I0420 20:50:28.646433 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba15b63dde517d3f49c1db0a4abcdbe1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ba15b63dde517d3f49c1db0a4abcdbe1\") " pod="kube-system/kube-apiserver-localhost" Apr 20 20:50:28.660892 kubelet[2962]: I0420 20:50:28.646446 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 20:50:28.660892 kubelet[2962]: I0420 20:50:28.646478 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " 
pod="kube-system/kube-controller-manager-localhost" Apr 20 20:50:28.660892 kubelet[2962]: I0420 20:50:28.646493 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 20:50:29.848043 kubelet[2962]: I0420 20:50:29.844619 2962 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Apr 20 20:50:29.949374 kubelet[2962]: I0420 20:50:29.861852 2962 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 20 20:50:30.697200 kubelet[2962]: E0420 20:50:30.696517 2962 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 20 20:50:30.725940 kubelet[2962]: E0420 20:50:30.725468 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:50:30.787936 kubelet[2962]: E0420 20:50:30.787065 2962 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 20 20:50:30.876703 kubelet[2962]: E0420 20:50:30.875707 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:50:31.648904 kubelet[2962]: E0420 20:50:31.647045 2962 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 20 20:50:31.661368 kubelet[2962]: E0420 20:50:31.660932 2962 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:50:31.994723 kubelet[2962]: E0420 20:50:31.977955 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:50:31.994723 kubelet[2962]: E0420 20:50:31.977964 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:50:31.994723 kubelet[2962]: E0420 20:50:31.984024 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:50:33.005059 kubelet[2962]: E0420 20:50:33.004762 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:50:33.057512 kubelet[2962]: E0420 20:50:33.041828 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:50:50.548228 kubelet[2962]: I0420 20:50:50.547532 2962 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 20 20:50:50.661472 kubelet[2962]: I0420 20:50:50.649154 2962 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 20 20:50:50.666445 containerd[1648]: time="2026-04-20T20:50:50.548657057Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 20 20:51:05.122416 kubelet[2962]: I0420 20:51:05.122037 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d5d00a8f-b49b-4e5a-a245-e6be69069c50-kube-proxy\") pod \"kube-proxy-xfq2g\" (UID: \"d5d00a8f-b49b-4e5a-a245-e6be69069c50\") " pod="kube-system/kube-proxy-xfq2g" Apr 20 20:51:05.151075 kubelet[2962]: I0420 20:51:05.123512 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5d00a8f-b49b-4e5a-a245-e6be69069c50-xtables-lock\") pod \"kube-proxy-xfq2g\" (UID: \"d5d00a8f-b49b-4e5a-a245-e6be69069c50\") " pod="kube-system/kube-proxy-xfq2g" Apr 20 20:51:05.151075 kubelet[2962]: I0420 20:51:05.123612 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5d00a8f-b49b-4e5a-a245-e6be69069c50-lib-modules\") pod \"kube-proxy-xfq2g\" (UID: \"d5d00a8f-b49b-4e5a-a245-e6be69069c50\") " pod="kube-system/kube-proxy-xfq2g" Apr 20 20:51:05.151075 kubelet[2962]: I0420 20:51:05.123632 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfwg9\" (UniqueName: \"kubernetes.io/projected/d5d00a8f-b49b-4e5a-a245-e6be69069c50-kube-api-access-gfwg9\") pod \"kube-proxy-xfq2g\" (UID: \"d5d00a8f-b49b-4e5a-a245-e6be69069c50\") " pod="kube-system/kube-proxy-xfq2g" Apr 20 20:51:06.028123 systemd[1]: Created slice kubepods-besteffort-podd5d00a8f_b49b_4e5a_a245_e6be69069c50.slice - libcontainer container kubepods-besteffort-podd5d00a8f_b49b_4e5a_a245_e6be69069c50.slice. 
Apr 20 20:51:06.476029 kubelet[2962]: E0420 20:51:06.401076 2962 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 20 20:51:06.476029 kubelet[2962]: E0420 20:51:06.464927 2962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d5d00a8f-b49b-4e5a-a245-e6be69069c50-kube-proxy podName:d5d00a8f-b49b-4e5a-a245-e6be69069c50 nodeName:}" failed. No retries permitted until 2026-04-20 20:51:06.96484772 +0000 UTC m=+45.037567842 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/d5d00a8f-b49b-4e5a-a245-e6be69069c50-kube-proxy") pod "kube-proxy-xfq2g" (UID: "d5d00a8f-b49b-4e5a-a245-e6be69069c50") : failed to sync configmap cache: timed out waiting for the condition Apr 20 20:51:08.178900 kubelet[2962]: E0420 20:51:08.171340 2962 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 20 20:51:08.408072 kubelet[2962]: E0420 20:51:08.219737 2962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d5d00a8f-b49b-4e5a-a245-e6be69069c50-kube-proxy podName:d5d00a8f-b49b-4e5a-a245-e6be69069c50 nodeName:}" failed. No retries permitted until 2026-04-20 20:51:09.219579442 +0000 UTC m=+47.292299549 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/d5d00a8f-b49b-4e5a-a245-e6be69069c50-kube-proxy") pod "kube-proxy-xfq2g" (UID: "d5d00a8f-b49b-4e5a-a245-e6be69069c50") : failed to sync configmap cache: timed out waiting for the condition Apr 20 20:51:10.395611 kubelet[2962]: E0420 20:51:10.388064 2962 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 20 20:51:10.483081 kubelet[2962]: E0420 20:51:10.464807 2962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d5d00a8f-b49b-4e5a-a245-e6be69069c50-kube-proxy podName:d5d00a8f-b49b-4e5a-a245-e6be69069c50 nodeName:}" failed. No retries permitted until 2026-04-20 20:51:12.448699497 +0000 UTC m=+50.521419611 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/d5d00a8f-b49b-4e5a-a245-e6be69069c50-kube-proxy") pod "kube-proxy-xfq2g" (UID: "d5d00a8f-b49b-4e5a-a245-e6be69069c50") : failed to sync configmap cache: timed out waiting for the condition Apr 20 20:51:13.216491 containerd[1648]: time="2026-04-20T20:51:13.215041155Z" level=info msg="received container exit event container_id:\"38fc99bfb037f7d749df8869849e08a52d710890f2b0e5fbfd27a719d68ef10b\" id:\"38fc99bfb037f7d749df8869849e08a52d710890f2b0e5fbfd27a719d68ef10b\" pid:2838 exit_status:1 exited_at:{seconds:1776718273 nanos:202950188}" Apr 20 20:51:13.217649 systemd[1]: cri-containerd-38fc99bfb037f7d749df8869849e08a52d710890f2b0e5fbfd27a719d68ef10b.scope: Deactivated successfully. Apr 20 20:51:13.218626 systemd[1]: cri-containerd-38fc99bfb037f7d749df8869849e08a52d710890f2b0e5fbfd27a719d68ef10b.scope: Consumed 37.466s CPU time, 45.8M memory peak. 
Apr 20 20:51:13.680037 kubelet[2962]: E0420 20:51:13.677262 2962 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 20 20:51:13.777013 kubelet[2962]: E0420 20:51:13.695899 2962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d5d00a8f-b49b-4e5a-a245-e6be69069c50-kube-proxy podName:d5d00a8f-b49b-4e5a-a245-e6be69069c50 nodeName:}" failed. No retries permitted until 2026-04-20 20:51:17.679963469 +0000 UTC m=+55.752683586 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/d5d00a8f-b49b-4e5a-a245-e6be69069c50-kube-proxy") pod "kube-proxy-xfq2g" (UID: "d5d00a8f-b49b-4e5a-a245-e6be69069c50") : failed to sync configmap cache: timed out waiting for the condition Apr 20 20:51:14.879048 systemd[1767]: Created slice background.slice - User Background Tasks Slice. Apr 20 20:51:15.003978 systemd[1767]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Apr 20 20:51:15.595450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38fc99bfb037f7d749df8869849e08a52d710890f2b0e5fbfd27a719d68ef10b-rootfs.mount: Deactivated successfully. Apr 20 20:51:15.699542 systemd[1767]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. Apr 20 20:51:16.614733 kubelet[2962]: E0420 20:51:16.611022 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.292s" Apr 20 20:51:16.963945 sudo[1818]: pam_unix(sudo:session): session closed for user root Apr 20 20:51:17.288626 sshd[1817]: Connection closed by 10.0.0.1 port 56308 Apr 20 20:51:17.091074 sshd-session[1813]: pam_unix(sshd:session): session closed for user core Apr 20 20:51:17.295247 systemd[1]: sshd@4-8193-10.0.0.6:22-10.0.0.1:56308.service: Deactivated successfully. 
Apr 20 20:51:17.477735 systemd[1]: session-6.scope: Deactivated successfully. Apr 20 20:51:17.522830 systemd[1]: session-6.scope: Consumed 24.022s CPU time, 221.3M memory peak. Apr 20 20:51:17.555624 systemd-logind[1620]: Session 6 logged out. Waiting for processes to exit. Apr 20 20:51:17.650792 systemd[1]: cri-containerd-6b69576dba9a0b0c0521b4322f5af7087be2f321b15b5a87dc1973ba93efa80d.scope: Deactivated successfully. Apr 20 20:51:17.697032 systemd[1]: cri-containerd-6b69576dba9a0b0c0521b4322f5af7087be2f321b15b5a87dc1973ba93efa80d.scope: Consumed 52.861s CPU time, 23.2M memory peak. Apr 20 20:51:17.762993 systemd-logind[1620]: Removed session 6. Apr 20 20:51:17.883335 kubelet[2962]: I0420 20:51:17.776824 2962 scope.go:122] "RemoveContainer" containerID="8d8c7f56935bcdf552c7dcb0ee3df7b429b4ec252ba46d58cedc616076770270" Apr 20 20:51:17.958884 containerd[1648]: time="2026-04-20T20:51:17.951995869Z" level=info msg="received container exit event container_id:\"6b69576dba9a0b0c0521b4322f5af7087be2f321b15b5a87dc1973ba93efa80d\" id:\"6b69576dba9a0b0c0521b4322f5af7087be2f321b15b5a87dc1973ba93efa80d\" pid:2751 exit_status:1 exited_at:{seconds:1776718277 nanos:694947085}" Apr 20 20:51:18.120712 kubelet[2962]: I0420 20:51:18.113739 2962 scope.go:122] "RemoveContainer" containerID="38fc99bfb037f7d749df8869849e08a52d710890f2b0e5fbfd27a719d68ef10b" Apr 20 20:51:18.120712 kubelet[2962]: E0420 20:51:18.113968 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:51:18.149581 kubelet[2962]: E0420 20:51:18.148581 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:51:18.159787 containerd[1648]: time="2026-04-20T20:51:18.159570030Z" level=info msg="RemoveContainer for 
\"8d8c7f56935bcdf552c7dcb0ee3df7b429b4ec252ba46d58cedc616076770270\"" Apr 20 20:51:18.175479 containerd[1648]: time="2026-04-20T20:51:18.172486156Z" level=info msg="RunPodSandbox for name:\"kube-proxy-xfq2g\" uid:\"d5d00a8f-b49b-4e5a-a245-e6be69069c50\" namespace:\"kube-system\"" Apr 20 20:51:18.327899 containerd[1648]: time="2026-04-20T20:51:18.327518598Z" level=info msg="CreateContainer within sandbox \"f9b6b7d24ccd8773e81c37185f210b92cf271465cbfe4f09851166ce3e8a2a4b\" for container name:\"kube-controller-manager\" attempt:2" Apr 20 20:51:18.494099 containerd[1648]: time="2026-04-20T20:51:18.478221024Z" level=info msg="RemoveContainer for \"8d8c7f56935bcdf552c7dcb0ee3df7b429b4ec252ba46d58cedc616076770270\" returns successfully" Apr 20 20:51:18.992214 containerd[1648]: time="2026-04-20T20:51:18.981902413Z" level=info msg="Container 7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:51:19.074324 kubelet[2962]: E0420 20:51:19.071925 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:51:19.079064 containerd[1648]: time="2026-04-20T20:51:19.075776637Z" level=info msg="connecting to shim 4ddfe878b566c84e1432e98f1bd77fa8eec947a7e4e73b474cb911b96767315a" address="unix:///run/containerd/s/d55266efaff317e418a919a202e1025c16794e38c13b8a48c021f0f70c48cd3c" namespace=k8s.io protocol=ttrpc version=3 Apr 20 20:51:19.539278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b69576dba9a0b0c0521b4322f5af7087be2f321b15b5a87dc1973ba93efa80d-rootfs.mount: Deactivated successfully. 
Apr 20 20:51:20.026657 containerd[1648]: time="2026-04-20T20:51:20.026278950Z" level=info msg="CreateContainer within sandbox \"f9b6b7d24ccd8773e81c37185f210b92cf271465cbfe4f09851166ce3e8a2a4b\" for name:\"kube-controller-manager\" attempt:2 returns container id \"7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67\"" Apr 20 20:51:20.156377 containerd[1648]: time="2026-04-20T20:51:20.155473652Z" level=info msg="StartContainer for \"7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67\"" Apr 20 20:51:20.185090 containerd[1648]: time="2026-04-20T20:51:20.184336193Z" level=info msg="connecting to shim 7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67" address="unix:///run/containerd/s/8b77fcd47a339a13e379c28c84db3ce17f41850650ed4777ce96169d01489760" protocol=ttrpc version=3 Apr 20 20:51:20.760402 systemd[1]: Started cri-containerd-4ddfe878b566c84e1432e98f1bd77fa8eec947a7e4e73b474cb911b96767315a.scope - libcontainer container 4ddfe878b566c84e1432e98f1bd77fa8eec947a7e4e73b474cb911b96767315a. Apr 20 20:51:21.194473 systemd[1]: Started cri-containerd-7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67.scope - libcontainer container 7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67. 
Apr 20 20:51:21.785012 kubelet[2962]: I0420 20:51:21.784063 2962 scope.go:122] "RemoveContainer" containerID="6b69576dba9a0b0c0521b4322f5af7087be2f321b15b5a87dc1973ba93efa80d" Apr 20 20:51:22.000063 kubelet[2962]: E0420 20:51:21.787061 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:51:22.031650 containerd[1648]: time="2026-04-20T20:51:22.000484890Z" level=info msg="CreateContainer within sandbox \"c96e5b00fc0697c110fb24dcdaa0b47b62e5b149bf28b5d19396495840a65284\" for container name:\"kube-scheduler\" attempt:1" Apr 20 20:51:22.791106 containerd[1648]: time="2026-04-20T20:51:22.787636867Z" level=info msg="Container 21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:51:23.216122 containerd[1648]: time="2026-04-20T20:51:23.179493410Z" level=info msg="CreateContainer within sandbox \"c96e5b00fc0697c110fb24dcdaa0b47b62e5b149bf28b5d19396495840a65284\" for name:\"kube-scheduler\" attempt:1 returns container id \"21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5\"" Apr 20 20:51:23.231214 containerd[1648]: time="2026-04-20T20:51:23.231116244Z" level=error msg="get state for 4ddfe878b566c84e1432e98f1bd77fa8eec947a7e4e73b474cb911b96767315a" error="context deadline exceeded" Apr 20 20:51:23.231214 containerd[1648]: time="2026-04-20T20:51:23.231210079Z" level=warning msg="unknown status" status=0 Apr 20 20:51:23.234111 containerd[1648]: time="2026-04-20T20:51:23.232706859Z" level=info msg="StartContainer for \"21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5\"" Apr 20 20:51:23.257620 containerd[1648]: time="2026-04-20T20:51:23.257343747Z" level=info msg="connecting to shim 21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5" address="unix:///run/containerd/s/61d64848142b77a3bbfcc5d60ff12803e5d69747435a7b24f6de5ae72a49376f" 
protocol=ttrpc version=3 Apr 20 20:51:23.766076 containerd[1648]: time="2026-04-20T20:51:23.763042059Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 20:51:24.474493 systemd[1]: Started cri-containerd-21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5.scope - libcontainer container 21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5. Apr 20 20:51:24.534483 kubelet[2962]: I0420 20:51:24.533307 2962 scope.go:122] "RemoveContainer" containerID="38fc99bfb037f7d749df8869849e08a52d710890f2b0e5fbfd27a719d68ef10b" Apr 20 20:51:24.978056 containerd[1648]: time="2026-04-20T20:51:24.956783247Z" level=info msg="RemoveContainer for \"38fc99bfb037f7d749df8869849e08a52d710890f2b0e5fbfd27a719d68ef10b\"" Apr 20 20:51:25.030993 containerd[1648]: time="2026-04-20T20:51:25.028598475Z" level=info msg="StartContainer for \"7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67\" returns successfully" Apr 20 20:51:25.054478 containerd[1648]: time="2026-04-20T20:51:25.054107326Z" level=info msg="RunPodSandbox for name:\"kube-proxy-xfq2g\" uid:\"d5d00a8f-b49b-4e5a-a245-e6be69069c50\" namespace:\"kube-system\" returns sandbox id \"4ddfe878b566c84e1432e98f1bd77fa8eec947a7e4e73b474cb911b96767315a\"" Apr 20 20:51:25.146028 containerd[1648]: time="2026-04-20T20:51:25.143970294Z" level=info msg="RemoveContainer for \"38fc99bfb037f7d749df8869849e08a52d710890f2b0e5fbfd27a719d68ef10b\" returns successfully" Apr 20 20:51:25.313844 kubelet[2962]: I0420 20:51:25.151239 2962 scope.go:122] "RemoveContainer" containerID="6b69576dba9a0b0c0521b4322f5af7087be2f321b15b5a87dc1973ba93efa80d" Apr 20 20:51:25.313844 kubelet[2962]: E0420 20:51:25.183941 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:51:25.677823 containerd[1648]: time="2026-04-20T20:51:25.592931388Z" level=info msg="RemoveContainer for 
\"6b69576dba9a0b0c0521b4322f5af7087be2f321b15b5a87dc1973ba93efa80d\"" Apr 20 20:51:26.012091 containerd[1648]: time="2026-04-20T20:51:26.008994803Z" level=info msg="CreateContainer within sandbox \"4ddfe878b566c84e1432e98f1bd77fa8eec947a7e4e73b474cb911b96767315a\" for container name:\"kube-proxy\"" Apr 20 20:51:26.926036 containerd[1648]: time="2026-04-20T20:51:26.874128123Z" level=info msg="Container c6d644b0ebdcc8b33102fc847055fd0c5c4181badbf1e839b2cde2d8294bdf17: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:51:26.926036 containerd[1648]: time="2026-04-20T20:51:26.918017783Z" level=error msg="get state for 21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5" error="context deadline exceeded" Apr 20 20:51:26.926036 containerd[1648]: time="2026-04-20T20:51:26.930426837Z" level=warning msg="unknown status" status=0 Apr 20 20:51:27.186180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount130890748.mount: Deactivated successfully. Apr 20 20:51:27.504512 containerd[1648]: time="2026-04-20T20:51:27.504080643Z" level=info msg="CreateContainer within sandbox \"4ddfe878b566c84e1432e98f1bd77fa8eec947a7e4e73b474cb911b96767315a\" for name:\"kube-proxy\" returns container id \"c6d644b0ebdcc8b33102fc847055fd0c5c4181badbf1e839b2cde2d8294bdf17\"" Apr 20 20:51:27.521987 kubelet[2962]: E0420 20:51:27.520378 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:51:27.593912 containerd[1648]: time="2026-04-20T20:51:27.556091878Z" level=info msg="StartContainer for \"c6d644b0ebdcc8b33102fc847055fd0c5c4181badbf1e839b2cde2d8294bdf17\"" Apr 20 20:51:27.696819 containerd[1648]: time="2026-04-20T20:51:27.694382532Z" level=info msg="connecting to shim c6d644b0ebdcc8b33102fc847055fd0c5c4181badbf1e839b2cde2d8294bdf17" address="unix:///run/containerd/s/d55266efaff317e418a919a202e1025c16794e38c13b8a48c021f0f70c48cd3c" protocol=ttrpc 
version=3 Apr 20 20:51:27.783417 containerd[1648]: time="2026-04-20T20:51:27.754466921Z" level=error msg="get state for c96e5b00fc0697c110fb24dcdaa0b47b62e5b149bf28b5d19396495840a65284" error="context deadline exceeded" Apr 20 20:51:27.783417 containerd[1648]: time="2026-04-20T20:51:27.755539698Z" level=warning msg="unknown status" status=0 Apr 20 20:51:27.955900 containerd[1648]: time="2026-04-20T20:51:27.954902291Z" level=info msg="RemoveContainer for \"6b69576dba9a0b0c0521b4322f5af7087be2f321b15b5a87dc1973ba93efa80d\" returns successfully" Apr 20 20:51:29.009608 containerd[1648]: time="2026-04-20T20:51:29.007394635Z" level=error msg="ttrpc: received message on inactive stream" stream=33 Apr 20 20:51:29.053188 containerd[1648]: time="2026-04-20T20:51:29.039799645Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 20:51:30.479829 systemd[1]: Started cri-containerd-c6d644b0ebdcc8b33102fc847055fd0c5c4181badbf1e839b2cde2d8294bdf17.scope - libcontainer container c6d644b0ebdcc8b33102fc847055fd0c5c4181badbf1e839b2cde2d8294bdf17. 
Apr 20 20:51:31.685709 containerd[1648]: time="2026-04-20T20:51:31.685001320Z" level=info msg="StartContainer for \"21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5\" returns successfully"
Apr 20 20:51:31.926793 kubelet[2962]: E0420 20:51:31.896650 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:51:33.157106 kubelet[2962]: E0420 20:51:33.155489 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:51:33.219869 kubelet[2962]: E0420 20:51:33.155506 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:51:33.278105 containerd[1648]: time="2026-04-20T20:51:33.277280532Z" level=error msg="get state for c6d644b0ebdcc8b33102fc847055fd0c5c4181badbf1e839b2cde2d8294bdf17" error="context deadline exceeded"
Apr 20 20:51:33.461798 containerd[1648]: time="2026-04-20T20:51:33.294970107Z" level=warning msg="unknown status" status=0
Apr 20 20:51:34.350423 kubelet[2962]: E0420 20:51:34.349220 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:51:35.562238 containerd[1648]: time="2026-04-20T20:51:35.557353223Z" level=error msg="get state for c6d644b0ebdcc8b33102fc847055fd0c5c4181badbf1e839b2cde2d8294bdf17" error="context deadline exceeded"
Apr 20 20:51:35.562238 containerd[1648]: time="2026-04-20T20:51:35.560427695Z" level=warning msg="unknown status" status=0
Apr 20 20:51:35.730038 kubelet[2962]: E0420 20:51:35.725095 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:51:36.330506 containerd[1648]: time="2026-04-20T20:51:36.326117945Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 20 20:51:36.330506 containerd[1648]: time="2026-04-20T20:51:36.329976904Z" level=error msg="ttrpc: received message on inactive stream" stream=5
Apr 20 20:51:38.524931 containerd[1648]: time="2026-04-20T20:51:38.523993237Z" level=info msg="StartContainer for \"c6d644b0ebdcc8b33102fc847055fd0c5c4181badbf1e839b2cde2d8294bdf17\" returns successfully"
Apr 20 20:51:40.078417 kubelet[2962]: E0420 20:51:40.077769 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:51:41.345004 kubelet[2962]: E0420 20:51:41.340905 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:51:41.992011 kubelet[2962]: E0420 20:51:41.991188 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:51:43.598201 kubelet[2962]: E0420 20:51:43.590000 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:51:46.606656 kubelet[2962]: I0420 20:51:46.605958 2962 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-xfq2g" podStartSLOduration=50.605898625 podStartE2EDuration="50.605898625s" podCreationTimestamp="2026-04-20 20:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 20:51:42.938411738 +0000 UTC m=+81.011131870" watchObservedRunningTime="2026-04-20 20:51:46.605898625 +0000 UTC m=+84.678618751"
Apr 20 20:51:49.757215 kubelet[2962]: E0420 20:51:49.756756 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:51:50.586956 kubelet[2962]: E0420 20:51:50.586162 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.035s"
Apr 20 20:51:52.649052 kubelet[2962]: E0420 20:51:52.647243 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:51:54.771837 kubelet[2962]: E0420 20:51:54.768510 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:52:14.862901 systemd[1]: cri-containerd-7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67.scope: Deactivated successfully.
Apr 20 20:52:14.965019 systemd[1]: cri-containerd-7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67.scope: Consumed 13.302s CPU time, 19.3M memory peak.
Apr 20 20:52:15.596302 containerd[1648]: time="2026-04-20T20:52:15.048586588Z" level=info msg="received container exit event container_id:\"7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67\" id:\"7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67\" pid:3131 exit_status:1 exited_at:{seconds:1776718334 nanos:881294060}"
Apr 20 20:52:16.375323 kubelet[2962]: E0420 20:52:16.373875 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.085s"
Apr 20 20:52:18.731285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67-rootfs.mount: Deactivated successfully.
Apr 20 20:52:20.630402 kubelet[2962]: I0420 20:52:20.627847 2962 scope.go:122] "RemoveContainer" containerID="7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67"
Apr 20 20:52:20.815766 kubelet[2962]: E0420 20:52:20.645869 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:52:20.815766 kubelet[2962]: E0420 20:52:20.657959 2962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 20 20:52:21.845613 kubelet[2962]: I0420 20:52:21.840085 2962 scope.go:122] "RemoveContainer" containerID="7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67"
Apr 20 20:52:22.073963 kubelet[2962]: E0420 20:52:21.877580 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:52:22.073963 kubelet[2962]: E0420 20:52:21.949608 2962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 20 20:52:24.305975 kubelet[2962]: E0420 20:52:24.304026 2962 kubelet_node_status.go:386] "Node not becoming ready in time after startup"
Apr 20 20:52:25.901492 systemd[1]: cri-containerd-21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5.scope: Deactivated successfully.
Apr 20 20:52:26.053612 systemd[1]: cri-containerd-21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5.scope: Consumed 19.415s CPU time, 19.7M memory peak.
Apr 20 20:52:26.137509 containerd[1648]: time="2026-04-20T20:52:25.906510530Z" level=info msg="received container exit event container_id:\"21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5\" id:\"21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5\" pid:3168 exit_status:1 exited_at:{seconds:1776718345 nanos:900694601}"
Apr 20 20:52:26.454710 kubelet[2962]: I0420 20:52:26.408595 2962 scope.go:122] "RemoveContainer" containerID="7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67"
Apr 20 20:52:26.775014 kubelet[2962]: E0420 20:52:26.591964 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:52:26.775014 kubelet[2962]: E0420 20:52:26.666575 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:52:27.667539 containerd[1648]: time="2026-04-20T20:52:27.659958332Z" level=info msg="CreateContainer within sandbox \"f9b6b7d24ccd8773e81c37185f210b92cf271465cbfe4f09851166ce3e8a2a4b\" for container name:\"kube-controller-manager\" attempt:3"
Apr 20 20:52:28.284008 containerd[1648]: time="2026-04-20T20:52:28.250626282Z" level=info msg="container event discarded" container=40510c1e3f5b4b7ae6a53e0fe605f9a7fb7f5c7b6e6d4d098b77631915d968c2 type=CONTAINER_CREATED_EVENT
Apr 20 20:52:28.456065 containerd[1648]: time="2026-04-20T20:52:28.424069466Z" level=info msg="container event discarded" container=40510c1e3f5b4b7ae6a53e0fe605f9a7fb7f5c7b6e6d4d098b77631915d968c2 type=CONTAINER_STARTED_EVENT
Apr 20 20:52:28.456065 containerd[1648]: time="2026-04-20T20:52:28.427230452Z" level=info msg="container event discarded" container=f9b6b7d24ccd8773e81c37185f210b92cf271465cbfe4f09851166ce3e8a2a4b type=CONTAINER_CREATED_EVENT
Apr 20 20:52:28.456065 containerd[1648]: time="2026-04-20T20:52:28.428064754Z" level=info msg="container event discarded" container=f9b6b7d24ccd8773e81c37185f210b92cf271465cbfe4f09851166ce3e8a2a4b type=CONTAINER_STARTED_EVENT
Apr 20 20:52:28.762554 containerd[1648]: time="2026-04-20T20:52:28.690656870Z" level=info msg="container event discarded" container=c96e5b00fc0697c110fb24dcdaa0b47b62e5b149bf28b5d19396495840a65284 type=CONTAINER_CREATED_EVENT
Apr 20 20:52:28.835761 containerd[1648]: time="2026-04-20T20:52:28.805537586Z" level=info msg="container event discarded" container=c96e5b00fc0697c110fb24dcdaa0b47b62e5b149bf28b5d19396495840a65284 type=CONTAINER_STARTED_EVENT
Apr 20 20:52:29.126375 containerd[1648]: time="2026-04-20T20:52:28.972122713Z" level=info msg="container event discarded" container=8d8c7f56935bcdf552c7dcb0ee3df7b429b4ec252ba46d58cedc616076770270 type=CONTAINER_CREATED_EVENT
Apr 20 20:52:29.126375 containerd[1648]: time="2026-04-20T20:52:28.984385763Z" level=info msg="container event discarded" container=a00dc0918cdfdd85beb9881c8b6769c6d61120fefe575b171326ddc8e2639cde type=CONTAINER_CREATED_EVENT
Apr 20 20:52:29.126375 containerd[1648]: time="2026-04-20T20:52:29.116116091Z" level=info msg="container event discarded" container=6b69576dba9a0b0c0521b4322f5af7087be2f321b15b5a87dc1973ba93efa80d type=CONTAINER_CREATED_EVENT
Apr 20 20:52:29.381534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1539444306.mount: Deactivated successfully.
Apr 20 20:52:29.493906 containerd[1648]: time="2026-04-20T20:52:29.487556121Z" level=info msg="Container 8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8: CDI devices from CRI Config.CDIDevices: []"
Apr 20 20:52:29.538274 kubelet[2962]: E0420 20:52:29.531867 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.193s"
Apr 20 20:52:29.878103 containerd[1648]: time="2026-04-20T20:52:29.863526960Z" level=info msg="CreateContainer within sandbox \"f9b6b7d24ccd8773e81c37185f210b92cf271465cbfe4f09851166ce3e8a2a4b\" for name:\"kube-controller-manager\" attempt:3 returns container id \"8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8\""
Apr 20 20:52:29.974988 containerd[1648]: time="2026-04-20T20:52:29.973614582Z" level=info msg="StartContainer for \"8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8\""
Apr 20 20:52:30.328762 containerd[1648]: time="2026-04-20T20:52:30.287707169Z" level=info msg="container event discarded" container=8d8c7f56935bcdf552c7dcb0ee3df7b429b4ec252ba46d58cedc616076770270 type=CONTAINER_STARTED_EVENT
Apr 20 20:52:30.464824 containerd[1648]: time="2026-04-20T20:52:30.442125852Z" level=info msg="connecting to shim 8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8" address="unix:///run/containerd/s/8b77fcd47a339a13e379c28c84db3ce17f41850650ed4777ce96169d01489760" protocol=ttrpc version=3
Apr 20 20:52:30.544481 containerd[1648]: time="2026-04-20T20:52:30.543635412Z" level=info msg="container event discarded" container=a00dc0918cdfdd85beb9881c8b6769c6d61120fefe575b171326ddc8e2639cde type=CONTAINER_STARTED_EVENT
Apr 20 20:52:30.855878 kubelet[2962]: E0420 20:52:30.854626 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:52:31.232553 containerd[1648]: time="2026-04-20T20:52:31.192656505Z" level=info msg="container event discarded" container=6b69576dba9a0b0c0521b4322f5af7087be2f321b15b5a87dc1973ba93efa80d type=CONTAINER_STARTED_EVENT
Apr 20 20:52:32.007333 kubelet[2962]: E0420 20:52:31.996857 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:52:32.356024 kubelet[2962]: E0420 20:52:32.341472 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.04s"
Apr 20 20:52:33.087082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5-rootfs.mount: Deactivated successfully.
Apr 20 20:52:35.073105 kubelet[2962]: I0420 20:52:35.069802 2962 scope.go:122] "RemoveContainer" containerID="21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5"
Apr 20 20:52:35.073105 kubelet[2962]: E0420 20:52:35.070648 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:52:35.269123 kubelet[2962]: E0420 20:52:35.092623 2962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 20 20:52:35.226493 systemd[1]: Started cri-containerd-8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8.scope - libcontainer container 8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8.
Apr 20 20:52:36.319062 kubelet[2962]: I0420 20:52:36.317871 2962 scope.go:122] "RemoveContainer" containerID="21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5"
Apr 20 20:52:36.487477 kubelet[2962]: E0420 20:52:36.343482 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:52:36.625346 containerd[1648]: time="2026-04-20T20:52:36.521275124Z" level=info msg="CreateContainer within sandbox \"c96e5b00fc0697c110fb24dcdaa0b47b62e5b149bf28b5d19396495840a65284\" for container name:\"kube-scheduler\" attempt:2"
Apr 20 20:52:36.856336 containerd[1648]: time="2026-04-20T20:52:36.853415394Z" level=info msg="Container c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948: CDI devices from CRI Config.CDIDevices: []"
Apr 20 20:52:37.327635 kubelet[2962]: E0420 20:52:37.298955 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:52:37.684568 containerd[1648]: time="2026-04-20T20:52:37.652506044Z" level=info msg="CreateContainer within sandbox \"c96e5b00fc0697c110fb24dcdaa0b47b62e5b149bf28b5d19396495840a65284\" for name:\"kube-scheduler\" attempt:2 returns container id \"c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948\""
Apr 20 20:52:37.841477 containerd[1648]: time="2026-04-20T20:52:37.800407890Z" level=info msg="StartContainer for \"c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948\""
Apr 20 20:52:37.919923 containerd[1648]: time="2026-04-20T20:52:37.917544124Z" level=info msg="connecting to shim c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948" address="unix:///run/containerd/s/61d64848142b77a3bbfcc5d60ff12803e5d69747435a7b24f6de5ae72a49376f" protocol=ttrpc version=3
Apr 20 20:52:38.255346 containerd[1648]: time="2026-04-20T20:52:38.205932724Z" level=error msg="get state for 8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8" error="context deadline exceeded"
Apr 20 20:52:38.367691 containerd[1648]: time="2026-04-20T20:52:38.255045757Z" level=warning msg="unknown status" status=0
Apr 20 20:52:38.833960 kubelet[2962]: E0420 20:52:38.822021 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.515s"
Apr 20 20:52:39.561818 containerd[1648]: time="2026-04-20T20:52:39.555598373Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 20 20:52:41.364591 kubelet[2962]: E0420 20:52:41.364047 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.09s"
Apr 20 20:52:42.827974 kubelet[2962]: E0420 20:52:42.825505 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:52:43.497044 kubelet[2962]: E0420 20:52:43.495530 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.086s"
Apr 20 20:52:43.935790 containerd[1648]: time="2026-04-20T20:52:43.876246463Z" level=info msg="StartContainer for \"8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8\" returns successfully"
Apr 20 20:52:44.655830 kubelet[2962]: E0420 20:52:44.651650 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.125s"
Apr 20 20:52:44.738095 kubelet[2962]: E0420 20:52:44.737483 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:52:44.961335 systemd[1]: Started cri-containerd-c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948.scope - libcontainer container c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948.
Apr 20 20:52:46.125494 kubelet[2962]: E0420 20:52:46.107510 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:52:47.472787 containerd[1648]: time="2026-04-20T20:52:47.460720881Z" level=error msg="get state for c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948" error="context deadline exceeded"
Apr 20 20:52:47.472787 containerd[1648]: time="2026-04-20T20:52:47.474509126Z" level=warning msg="unknown status" status=0
Apr 20 20:52:47.970637 kubelet[2962]: E0420 20:52:47.968721 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:52:49.763996 containerd[1648]: time="2026-04-20T20:52:49.749046266Z" level=error msg="get state for c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948" error="context deadline exceeded"
Apr 20 20:52:49.763996 containerd[1648]: time="2026-04-20T20:52:49.766610944Z" level=warning msg="unknown status" status=0
Apr 20 20:52:52.156613 containerd[1648]: time="2026-04-20T20:52:52.154619299Z" level=error msg="get state for c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948" error="context deadline exceeded"
Apr 20 20:52:52.156613 containerd[1648]: time="2026-04-20T20:52:52.154874013Z" level=warning msg="unknown status" status=0
Apr 20 20:52:52.586068 containerd[1648]: time="2026-04-20T20:52:52.510015226Z" level=error msg="ttrpc: received message on inactive stream" stream=7
Apr 20 20:52:52.662882 containerd[1648]: time="2026-04-20T20:52:52.587624802Z" level=error msg="ttrpc: received message on inactive stream" stream=9
Apr 20 20:52:52.662882 containerd[1648]: time="2026-04-20T20:52:52.587724780Z" level=error msg="ttrpc: received message on inactive stream" stream=5
Apr 20 20:52:52.755313 kubelet[2962]: E0420 20:52:52.685321 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:52:53.282599 kubelet[2962]: E0420 20:52:53.269656 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:52:54.068681 containerd[1648]: time="2026-04-20T20:52:54.063957568Z" level=info msg="StartContainer for \"c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948\" returns successfully"
Apr 20 20:52:54.363612 kubelet[2962]: E0420 20:52:54.347936 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:52:55.197078 kubelet[2962]: E0420 20:52:55.194807 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:52:56.486921 kubelet[2962]: E0420 20:52:56.484688 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:52:57.568049 kubelet[2962]: E0420 20:52:57.567039 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:52:58.352031 kubelet[2962]: E0420 20:52:58.351270 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:53:01.672988 kubelet[2962]: E0420 20:53:01.670836 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:53:03.702035 kubelet[2962]: E0420 20:53:03.677602 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:53:04.476480 kubelet[2962]: E0420 20:53:04.470707 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:53:08.925625 kubelet[2962]: E0420 20:53:08.909728 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:53:13.359675 update_engine[1623]: I20260420 20:53:13.352583 1623 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 20 20:53:13.359675 update_engine[1623]: I20260420 20:53:13.361776 1623 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 20 20:53:13.927383 update_engine[1623]: I20260420 20:53:13.377516 1623 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 20 20:53:13.927383 update_engine[1623]: I20260420 20:53:13.429790 1623 omaha_request_params.cc:62] Current group set to alpha
Apr 20 20:53:13.927383 update_engine[1623]: I20260420 20:53:13.430110 1623 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 20 20:53:13.927383 update_engine[1623]: I20260420 20:53:13.430127 1623 update_attempter.cc:643] Scheduling an action processor start.
Apr 20 20:53:13.927383 update_engine[1623]: I20260420 20:53:13.441566 1623 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 20 20:53:13.927383 update_engine[1623]: I20260420 20:53:13.449615 1623 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 20 20:53:13.927383 update_engine[1623]: I20260420 20:53:13.460453 1623 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 20 20:53:13.927383 update_engine[1623]: I20260420 20:53:13.460623 1623 omaha_request_action.cc:272] Request:
Apr 20 20:53:13.927383 update_engine[1623]:
Apr 20 20:53:13.927383 update_engine[1623]:
Apr 20 20:53:13.927383 update_engine[1623]:
Apr 20 20:53:13.927383 update_engine[1623]:
Apr 20 20:53:13.927383 update_engine[1623]:
Apr 20 20:53:13.927383 update_engine[1623]:
Apr 20 20:53:13.927383 update_engine[1623]:
Apr 20 20:53:13.927383 update_engine[1623]:
Apr 20 20:53:13.927383 update_engine[1623]: I20260420 20:53:13.460633 1623 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 20:53:13.927383 update_engine[1623]: I20260420 20:53:13.773639 1623 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 20:53:13.927383 update_engine[1623]: I20260420 20:53:13.917837 1623 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 20:53:13.927383 update_engine[1623]: E20260420 20:53:13.924000 1623 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 20:53:13.949951 locksmithd[1697]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 20 20:53:13.950529 update_engine[1623]: I20260420 20:53:13.930610 1623 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 20 20:53:14.059512 kubelet[2962]: E0420 20:53:14.058580 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:53:18.727488 kubelet[2962]: E0420 20:53:18.725359 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:53:19.156077 kubelet[2962]: E0420 20:53:19.123955 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:53:24.170255 kubelet[2962]: E0420 20:53:24.162316 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:53:24.363568 update_engine[1623]: I20260420 20:53:24.362941 1623 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 20:53:24.536752 update_engine[1623]: I20260420 20:53:24.365960 1623 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 20:53:24.536752 update_engine[1623]: I20260420 20:53:24.393959 1623 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 20:53:24.536752 update_engine[1623]: E20260420 20:53:24.403548 1623 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 20:53:24.536752 update_engine[1623]: I20260420 20:53:24.404471 1623 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 20 20:53:29.349985 kubelet[2962]: E0420 20:53:29.348907 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:53:31.900397 kubelet[2962]: E0420 20:53:31.898516 2962 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 20 20:53:33.078433 kubelet[2962]: E0420 20:53:33.074826 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:53:34.253799 kubelet[2962]: E0420 20:53:34.251065 2962 controller.go:251] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 20 20:53:34.349681 update_engine[1623]: I20260420 20:53:34.316797 1623 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 20:53:34.349681 update_engine[1623]: I20260420 20:53:34.327818 1623 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 20:53:34.349681 update_engine[1623]: I20260420 20:53:34.343686 1623 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 20:53:34.562631 update_engine[1623]: E20260420 20:53:34.357946 1623 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 20:53:34.562631 update_engine[1623]: I20260420 20:53:34.360757 1623 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 20 20:53:34.620802 kubelet[2962]: E0420 20:53:34.434249 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:53:39.587941 kubelet[2962]: E0420 20:53:39.584774 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:53:44.319817 update_engine[1623]: I20260420 20:53:44.316931 1623 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 20:53:44.511899 update_engine[1623]: I20260420 20:53:44.325455 1623 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 20:53:44.511899 update_engine[1623]: I20260420 20:53:44.357220 1623 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 20:53:44.511899 update_engine[1623]: E20260420 20:53:44.362877 1623 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 20:53:44.511899 update_engine[1623]: I20260420 20:53:44.367208 1623 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 20 20:53:44.511899 update_engine[1623]: I20260420 20:53:44.367327 1623 omaha_request_action.cc:617] Omaha request response:
Apr 20 20:53:44.511899 update_engine[1623]: E20260420 20:53:44.396971 1623 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 20 20:53:44.511899 update_engine[1623]: I20260420 20:53:44.401762 1623 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 20 20:53:44.511899 update_engine[1623]: I20260420 20:53:44.401782 1623 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 20 20:53:44.511899 update_engine[1623]: I20260420 20:53:44.401786 1623 update_attempter.cc:306] Processing Done.
Apr 20 20:53:44.511899 update_engine[1623]: E20260420 20:53:44.402722 1623 update_attempter.cc:619] Update failed.
Apr 20 20:53:44.511899 update_engine[1623]: I20260420 20:53:44.409590 1623 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 20 20:53:44.511899 update_engine[1623]: I20260420 20:53:44.410084 1623 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 20 20:53:44.511899 update_engine[1623]: I20260420 20:53:44.424062 1623 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 20 20:53:44.511899 update_engine[1623]: I20260420 20:53:44.480620 1623 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 20 20:53:44.511899 update_engine[1623]: I20260420 20:53:44.480938 1623 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 20 20:53:44.511899 update_engine[1623]: I20260420 20:53:44.480949 1623 omaha_request_action.cc:272] Request:
Apr 20 20:53:44.511899 update_engine[1623]:
Apr 20 20:53:44.511899 update_engine[1623]:
Apr 20 20:53:44.938003 locksmithd[1697]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 20 20:53:44.938003 locksmithd[1697]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 20 20:53:44.947460 kubelet[2962]: E0420 20:53:44.924806 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 20:53:45.048765 update_engine[1623]:
Apr 20 20:53:45.048765 update_engine[1623]:
Apr 20 20:53:45.048765 update_engine[1623]:
Apr 20 20:53:45.048765 update_engine[1623]:
Apr 20 20:53:45.048765 update_engine[1623]: I20260420 20:53:44.480955 1623 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 20:53:45.048765 update_engine[1623]: I20260420 20:53:44.481011 1623 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 20:53:45.048765 update_engine[1623]: I20260420 20:53:44.481499 1623 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 20:53:45.048765 update_engine[1623]: E20260420 20:53:44.506796 1623 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 20:53:45.048765 update_engine[1623]: I20260420 20:53:44.510591 1623 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 20 20:53:45.048765 update_engine[1623]: I20260420 20:53:44.511605 1623 omaha_request_action.cc:617] Omaha request response:
Apr 20 20:53:45.048765 update_engine[1623]: I20260420 20:53:44.511711 1623 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 20 20:53:45.048765 update_engine[1623]: I20260420 20:53:44.511720 1623 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 20 20:53:45.048765 update_engine[1623]: I20260420 20:53:44.511726 1623 update_attempter.cc:306] Processing Done.
Apr 20 20:53:45.048765 update_engine[1623]: I20260420 20:53:44.511767 1623 update_attempter.cc:310] Error event sent.
Apr 20 20:53:45.048765 update_engine[1623]: I20260420 20:53:44.511821 1623 update_check_scheduler.cc:74] Next update check in 41m47s
Apr 20 20:53:45.280203 containerd[1648]: time="2026-04-20T20:53:45.270770339Z" level=info msg="received container exit event container_id:\"8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8\" id:\"8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8\" pid:3296 exit_status:1 exited_at:{seconds:1776718425 nanos:233648155}"
Apr 20 20:53:45.271748 systemd[1]: cri-containerd-8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8.scope: Deactivated successfully.
Apr 20 20:53:45.372663 systemd[1]: cri-containerd-8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8.scope: Consumed 19.538s CPU time, 18.6M memory peak.
Apr 20 20:53:49.161387 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8-rootfs.mount: Deactivated successfully. Apr 20 20:53:50.079980 kubelet[2962]: E0420 20:53:50.078595 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:53:50.503471 kubelet[2962]: I0420 20:53:50.480484 2962 scope.go:122] "RemoveContainer" containerID="7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67" Apr 20 20:53:50.639983 kubelet[2962]: I0420 20:53:50.637798 2962 scope.go:122] "RemoveContainer" containerID="8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8" Apr 20 20:53:50.675029 kubelet[2962]: E0420 20:53:50.657662 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:53:50.841522 kubelet[2962]: E0420 20:53:50.740126 2962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" Apr 20 20:53:51.055507 containerd[1648]: time="2026-04-20T20:53:51.048786155Z" level=info msg="RemoveContainer for \"7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67\"" Apr 20 20:53:51.528923 containerd[1648]: time="2026-04-20T20:53:51.526095134Z" level=info msg="RemoveContainer for \"7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67\" returns successfully" Apr 20 20:53:54.141826 kubelet[2962]: E0420 20:53:54.140747 2962 kubelet.go:2691] "Housekeeping took longer than expected" 
err="housekeeping took too long" expected="1s" actual="2.825s" Apr 20 20:53:54.364329 kubelet[2962]: I0420 20:53:54.363837 2962 scope.go:122] "RemoveContainer" containerID="8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8" Apr 20 20:53:54.564981 kubelet[2962]: E0420 20:53:54.478664 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:53:54.564981 kubelet[2962]: E0420 20:53:54.562617 2962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" Apr 20 20:53:55.299932 kubelet[2962]: E0420 20:53:55.296485 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:53:58.447722 kubelet[2962]: E0420 20:53:58.443038 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:53:58.706958 systemd[1]: cri-containerd-c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948.scope: Deactivated successfully. 
Apr 20 20:53:58.790385 containerd[1648]: time="2026-04-20T20:53:58.754396960Z" level=info msg="received container exit event container_id:\"c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948\" id:\"c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948\" pid:3338 exit_status:1 exited_at:{seconds:1776718438 nanos:696441029}" Apr 20 20:53:58.770443 systemd[1]: cri-containerd-c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948.scope: Consumed 26.256s CPU time, 18.7M memory peak. Apr 20 20:54:00.389646 kubelet[2962]: E0420 20:54:00.385015 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:54:00.672347 kubelet[2962]: E0420 20:54:00.627823 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:54:01.755496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948-rootfs.mount: Deactivated successfully. 
Apr 20 20:54:02.329302 kubelet[2962]: E0420 20:54:02.328781 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:54:02.508071 kubelet[2962]: I0420 20:54:02.407899 2962 scope.go:122] "RemoveContainer" containerID="21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5" Apr 20 20:54:02.684552 kubelet[2962]: I0420 20:54:02.667753 2962 scope.go:122] "RemoveContainer" containerID="c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948" Apr 20 20:54:02.684552 kubelet[2962]: E0420 20:54:02.677733 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:54:02.756234 kubelet[2962]: E0420 20:54:02.711700 2962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca" Apr 20 20:54:03.078034 containerd[1648]: time="2026-04-20T20:54:02.981682158Z" level=info msg="RemoveContainer for \"21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5\"" Apr 20 20:54:03.159607 containerd[1648]: time="2026-04-20T20:54:03.159122469Z" level=info msg="RemoveContainer for \"21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5\" returns successfully" Apr 20 20:54:04.454834 kubelet[2962]: I0420 20:54:04.454057 2962 scope.go:122] "RemoveContainer" containerID="c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948" Apr 20 20:54:04.570986 kubelet[2962]: E0420 20:54:04.463722 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:54:04.570986 kubelet[2962]: E0420 20:54:04.491725 2962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca" Apr 20 20:54:05.558588 kubelet[2962]: E0420 20:54:05.557384 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:54:09.396556 kubelet[2962]: I0420 20:54:09.395549 2962 scope.go:122] "RemoveContainer" containerID="8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8" Apr 20 20:54:09.628980 kubelet[2962]: E0420 20:54:09.441032 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:54:10.068084 containerd[1648]: time="2026-04-20T20:54:10.066115441Z" level=info msg="CreateContainer within sandbox \"f9b6b7d24ccd8773e81c37185f210b92cf271465cbfe4f09851166ce3e8a2a4b\" for container name:\"kube-controller-manager\" attempt:4" Apr 20 20:54:10.592973 containerd[1648]: time="2026-04-20T20:54:10.592662185Z" level=info msg="Container ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:54:11.003556 kubelet[2962]: E0420 20:54:10.878010 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:54:11.855022 containerd[1648]: time="2026-04-20T20:54:11.844967746Z" level=info msg="CreateContainer within sandbox 
\"f9b6b7d24ccd8773e81c37185f210b92cf271465cbfe4f09851166ce3e8a2a4b\" for name:\"kube-controller-manager\" attempt:4 returns container id \"ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425\"" Apr 20 20:54:12.065887 containerd[1648]: time="2026-04-20T20:54:12.061852169Z" level=info msg="StartContainer for \"ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425\"" Apr 20 20:54:12.158515 containerd[1648]: time="2026-04-20T20:54:12.084486786Z" level=info msg="container event discarded" container=8d8c7f56935bcdf552c7dcb0ee3df7b429b4ec252ba46d58cedc616076770270 type=CONTAINER_STOPPED_EVENT Apr 20 20:54:12.379036 kubelet[2962]: E0420 20:54:12.374044 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.099s" Apr 20 20:54:12.610206 containerd[1648]: time="2026-04-20T20:54:12.550518478Z" level=info msg="connecting to shim ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425" address="unix:///run/containerd/s/8b77fcd47a339a13e379c28c84db3ce17f41850650ed4777ce96169d01489760" protocol=ttrpc version=3 Apr 20 20:54:14.306492 kubelet[2962]: E0420 20:54:14.299964 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.018s" Apr 20 20:54:15.673413 containerd[1648]: time="2026-04-20T20:54:15.667035705Z" level=info msg="container event discarded" container=38fc99bfb037f7d749df8869849e08a52d710890f2b0e5fbfd27a719d68ef10b type=CONTAINER_CREATED_EVENT Apr 20 20:54:16.077015 kubelet[2962]: E0420 20:54:16.053094 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:54:16.146990 systemd[1]: Started cri-containerd-ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425.scope - libcontainer container ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425. 
Apr 20 20:54:18.251642 containerd[1648]: time="2026-04-20T20:54:18.248834079Z" level=error msg="get state for ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425" error="context deadline exceeded" Apr 20 20:54:18.251642 containerd[1648]: time="2026-04-20T20:54:18.250843957Z" level=warning msg="unknown status" status=0 Apr 20 20:54:19.066691 containerd[1648]: time="2026-04-20T20:54:19.058555932Z" level=info msg="container event discarded" container=38fc99bfb037f7d749df8869849e08a52d710890f2b0e5fbfd27a719d68ef10b type=CONTAINER_STARTED_EVENT Apr 20 20:54:20.602083 containerd[1648]: time="2026-04-20T20:54:20.594275077Z" level=error msg="get state for ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425" error="context deadline exceeded" Apr 20 20:54:20.602083 containerd[1648]: time="2026-04-20T20:54:20.601601379Z" level=warning msg="unknown status" status=0 Apr 20 20:54:21.174589 kubelet[2962]: E0420 20:54:21.166730 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:54:21.681803 kubelet[2962]: I0420 20:54:21.679027 2962 scope.go:122] "RemoveContainer" containerID="c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948" Apr 20 20:54:21.780017 kubelet[2962]: E0420 20:54:21.730648 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:54:22.217333 containerd[1648]: time="2026-04-20T20:54:22.216931488Z" level=info msg="CreateContainer within sandbox \"c96e5b00fc0697c110fb24dcdaa0b47b62e5b149bf28b5d19396495840a65284\" for container name:\"kube-scheduler\" attempt:3" Apr 20 20:54:22.553394 containerd[1648]: time="2026-04-20T20:54:22.544681381Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 20:54:22.553394 containerd[1648]: 
time="2026-04-20T20:54:22.547492409Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 20 20:54:23.163107 containerd[1648]: time="2026-04-20T20:54:23.159937262Z" level=info msg="Container ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:54:23.878219 containerd[1648]: time="2026-04-20T20:54:23.871428746Z" level=info msg="CreateContainer within sandbox \"c96e5b00fc0697c110fb24dcdaa0b47b62e5b149bf28b5d19396495840a65284\" for name:\"kube-scheduler\" attempt:3 returns container id \"ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70\"" Apr 20 20:54:24.268807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount860545580.mount: Deactivated successfully. Apr 20 20:54:24.465110 containerd[1648]: time="2026-04-20T20:54:24.379553411Z" level=info msg="StartContainer for \"ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70\"" Apr 20 20:54:24.665635 containerd[1648]: time="2026-04-20T20:54:24.650969894Z" level=info msg="connecting to shim ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70" address="unix:///run/containerd/s/61d64848142b77a3bbfcc5d60ff12803e5d69747435a7b24f6de5ae72a49376f" protocol=ttrpc version=3 Apr 20 20:54:25.449472 kubelet[2962]: E0420 20:54:25.448615 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.113s" Apr 20 20:54:26.250716 kubelet[2962]: E0420 20:54:26.246238 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:54:26.485940 containerd[1648]: time="2026-04-20T20:54:26.475281700Z" level=info msg="StartContainer for \"ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425\" returns successfully" Apr 20 20:54:28.663201 kubelet[2962]: E0420 20:54:28.662957 2962 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:54:28.903061 systemd[1]: Started cri-containerd-ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70.scope - libcontainer container ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70. Apr 20 20:54:30.583320 kubelet[2962]: I0420 20:54:30.581948 2962 scope.go:122] "RemoveContainer" containerID="c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948" Apr 20 20:54:30.934832 containerd[1648]: time="2026-04-20T20:54:30.901049657Z" level=info msg="RemoveContainer for \"c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948\"" Apr 20 20:54:31.347724 kubelet[2962]: E0420 20:54:31.335670 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:54:31.849893 containerd[1648]: time="2026-04-20T20:54:31.848738512Z" level=error msg="get state for ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70" error="context deadline exceeded" Apr 20 20:54:31.849893 containerd[1648]: time="2026-04-20T20:54:31.848814451Z" level=warning msg="unknown status" status=0 Apr 20 20:54:32.467619 containerd[1648]: time="2026-04-20T20:54:32.466819014Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 20 20:54:32.947603 containerd[1648]: time="2026-04-20T20:54:32.941597274Z" level=info msg="RemoveContainer for \"c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948\" returns successfully" Apr 20 20:54:32.991119 kubelet[2962]: E0420 20:54:32.987981 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:54:33.355578 containerd[1648]: time="2026-04-20T20:54:33.354080126Z" level=info 
msg="StartContainer for \"ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70\" returns successfully" Apr 20 20:54:34.595083 kubelet[2962]: E0420 20:54:34.575654 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:54:35.856604 kubelet[2962]: E0420 20:54:35.853927 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:54:36.450599 kubelet[2962]: E0420 20:54:36.447379 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:54:40.787663 kubelet[2962]: E0420 20:54:40.781088 2962 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 20 20:54:41.577413 kubelet[2962]: E0420 20:54:41.573068 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:54:41.959785 kubelet[2962]: E0420 20:54:41.948651 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:54:43.621934 kubelet[2962]: E0420 20:54:43.619503 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:54:46.641611 kubelet[2962]: E0420 20:54:46.638808 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" Apr 20 20:54:51.714609 kubelet[2962]: E0420 20:54:51.713468 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:54:53.566622 kubelet[2962]: E0420 20:54:53.564862 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:54:54.881709 kubelet[2962]: E0420 20:54:54.880673 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:54:56.802173 kubelet[2962]: E0420 20:54:56.797685 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:55:01.863907 kubelet[2962]: E0420 20:55:01.857093 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:55:06.970695 kubelet[2962]: E0420 20:55:06.946354 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:55:12.234026 kubelet[2962]: E0420 20:55:12.228591 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:55:13.376882 kubelet[2962]: E0420 20:55:13.376440 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Apr 20 20:55:17.333593 kubelet[2962]: E0420 20:55:17.332239 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:55:19.625003 kubelet[2962]: E0420 20:55:19.623773 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:55:22.488554 kubelet[2962]: E0420 20:55:22.455601 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:55:27.644048 kubelet[2962]: E0420 20:55:27.636676 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:55:32.695338 kubelet[2962]: E0420 20:55:32.687127 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:55:37.833395 kubelet[2962]: E0420 20:55:37.831012 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:55:42.873097 kubelet[2962]: E0420 20:55:42.862840 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:55:48.028941 kubelet[2962]: E0420 20:55:47.994770 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 
20:55:48.486991 kubelet[2962]: E0420 20:55:48.387045 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:55:53.376857 kubelet[2962]: E0420 20:55:53.376003 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:55:58.431373 kubelet[2962]: E0420 20:55:58.426970 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:56:03.485066 kubelet[2962]: E0420 20:56:03.483785 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:56:07.601585 kubelet[2962]: E0420 20:56:07.584864 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:56:08.529190 kubelet[2962]: E0420 20:56:08.524002 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:56:13.629351 kubelet[2962]: E0420 20:56:13.627447 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:56:15.640040 containerd[1648]: time="2026-04-20T20:56:15.638285216Z" level=info msg="container event discarded" container=38fc99bfb037f7d749df8869849e08a52d710890f2b0e5fbfd27a719d68ef10b type=CONTAINER_STOPPED_EVENT Apr 20 20:56:18.492115 containerd[1648]: 
time="2026-04-20T20:56:18.489970901Z" level=info msg="container event discarded" container=8d8c7f56935bcdf552c7dcb0ee3df7b429b4ec252ba46d58cedc616076770270 type=CONTAINER_DELETED_EVENT Apr 20 20:56:18.690254 kubelet[2962]: E0420 20:56:18.682914 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:56:19.851061 containerd[1648]: time="2026-04-20T20:56:19.849708649Z" level=info msg="container event discarded" container=6b69576dba9a0b0c0521b4322f5af7087be2f321b15b5a87dc1973ba93efa80d type=CONTAINER_STOPPED_EVENT Apr 20 20:56:20.046964 containerd[1648]: time="2026-04-20T20:56:20.044031638Z" level=info msg="container event discarded" container=7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67 type=CONTAINER_CREATED_EVENT Apr 20 20:56:21.289692 kubelet[2962]: E0420 20:56:21.286715 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:56:23.145114 containerd[1648]: time="2026-04-20T20:56:23.143897036Z" level=info msg="container event discarded" container=21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5 type=CONTAINER_CREATED_EVENT Apr 20 20:56:23.867613 kubelet[2962]: E0420 20:56:23.866384 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:56:24.849978 containerd[1648]: time="2026-04-20T20:56:24.846844887Z" level=info msg="container event discarded" container=7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67 type=CONTAINER_STARTED_EVENT Apr 20 20:56:25.073312 containerd[1648]: time="2026-04-20T20:56:25.065617877Z" level=info msg="container event discarded" 
container=4ddfe878b566c84e1432e98f1bd77fa8eec947a7e4e73b474cb911b96767315a type=CONTAINER_CREATED_EVENT Apr 20 20:56:25.073312 containerd[1648]: time="2026-04-20T20:56:25.072024070Z" level=info msg="container event discarded" container=4ddfe878b566c84e1432e98f1bd77fa8eec947a7e4e73b474cb911b96767315a type=CONTAINER_STARTED_EVENT Apr 20 20:56:25.193659 containerd[1648]: time="2026-04-20T20:56:25.158588841Z" level=info msg="container event discarded" container=38fc99bfb037f7d749df8869849e08a52d710890f2b0e5fbfd27a719d68ef10b type=CONTAINER_DELETED_EVENT Apr 20 20:56:27.247258 containerd[1648]: time="2026-04-20T20:56:27.209794588Z" level=info msg="container event discarded" container=c6d644b0ebdcc8b33102fc847055fd0c5c4181badbf1e839b2cde2d8294bdf17 type=CONTAINER_CREATED_EVENT Apr 20 20:56:27.978758 containerd[1648]: time="2026-04-20T20:56:27.974116490Z" level=info msg="container event discarded" container=6b69576dba9a0b0c0521b4322f5af7087be2f321b15b5a87dc1973ba93efa80d type=CONTAINER_DELETED_EVENT Apr 20 20:56:28.965827 kubelet[2962]: E0420 20:56:28.964256 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:56:31.648983 containerd[1648]: time="2026-04-20T20:56:31.646905431Z" level=info msg="container event discarded" container=21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5 type=CONTAINER_STARTED_EVENT Apr 20 20:56:34.089634 kubelet[2962]: E0420 20:56:34.066999 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:56:34.388413 kubelet[2962]: I0420 20:56:34.373027 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/81b75921-1f05-4ead-84ce-450f9339eb4c-run\") pod 
\"kube-flannel-ds-lpvvz\" (UID: \"81b75921-1f05-4ead-84ce-450f9339eb4c\") " pod="kube-flannel/kube-flannel-ds-lpvvz" Apr 20 20:56:34.388413 kubelet[2962]: I0420 20:56:34.373293 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/81b75921-1f05-4ead-84ce-450f9339eb4c-cni-plugin\") pod \"kube-flannel-ds-lpvvz\" (UID: \"81b75921-1f05-4ead-84ce-450f9339eb4c\") " pod="kube-flannel/kube-flannel-ds-lpvvz" Apr 20 20:56:34.388413 kubelet[2962]: I0420 20:56:34.373434 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/81b75921-1f05-4ead-84ce-450f9339eb4c-cni\") pod \"kube-flannel-ds-lpvvz\" (UID: \"81b75921-1f05-4ead-84ce-450f9339eb4c\") " pod="kube-flannel/kube-flannel-ds-lpvvz" Apr 20 20:56:34.388413 kubelet[2962]: I0420 20:56:34.373447 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/81b75921-1f05-4ead-84ce-450f9339eb4c-flannel-cfg\") pod \"kube-flannel-ds-lpvvz\" (UID: \"81b75921-1f05-4ead-84ce-450f9339eb4c\") " pod="kube-flannel/kube-flannel-ds-lpvvz" Apr 20 20:56:34.388413 kubelet[2962]: I0420 20:56:34.373518 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81b75921-1f05-4ead-84ce-450f9339eb4c-xtables-lock\") pod \"kube-flannel-ds-lpvvz\" (UID: \"81b75921-1f05-4ead-84ce-450f9339eb4c\") " pod="kube-flannel/kube-flannel-ds-lpvvz" Apr 20 20:56:34.439833 kubelet[2962]: I0420 20:56:34.373531 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pwrg\" (UniqueName: \"kubernetes.io/projected/81b75921-1f05-4ead-84ce-450f9339eb4c-kube-api-access-7pwrg\") pod \"kube-flannel-ds-lpvvz\" (UID: 
\"81b75921-1f05-4ead-84ce-450f9339eb4c\") " pod="kube-flannel/kube-flannel-ds-lpvvz" Apr 20 20:56:34.448174 kubelet[2962]: E0420 20:56:34.447817 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:56:35.030365 systemd[1]: Created slice kubepods-burstable-pod81b75921_1f05_4ead_84ce_450f9339eb4c.slice - libcontainer container kubepods-burstable-pod81b75921_1f05_4ead_84ce_450f9339eb4c.slice. Apr 20 20:56:35.658229 kubelet[2962]: E0420 20:56:35.656071 2962 configmap.go:193] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition Apr 20 20:56:35.709499 kubelet[2962]: E0420 20:56:35.700101 2962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/81b75921-1f05-4ead-84ce-450f9339eb4c-flannel-cfg podName:81b75921-1f05-4ead-84ce-450f9339eb4c nodeName:}" failed. No retries permitted until 2026-04-20 20:56:36.194700473 +0000 UTC m=+374.267420597 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/81b75921-1f05-4ead-84ce-450f9339eb4c-flannel-cfg") pod "kube-flannel-ds-lpvvz" (UID: "81b75921-1f05-4ead-84ce-450f9339eb4c") : failed to sync configmap cache: timed out waiting for the condition Apr 20 20:56:37.366699 kubelet[2962]: E0420 20:56:37.365489 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:56:37.441536 containerd[1648]: time="2026-04-20T20:56:37.425076678Z" level=info msg="RunPodSandbox for name:\"kube-flannel-ds-lpvvz\" uid:\"81b75921-1f05-4ead-84ce-450f9339eb4c\" namespace:\"kube-flannel\"" Apr 20 20:56:38.543024 containerd[1648]: time="2026-04-20T20:56:38.510121495Z" level=info msg="container event discarded" container=c6d644b0ebdcc8b33102fc847055fd0c5c4181badbf1e839b2cde2d8294bdf17 type=CONTAINER_STARTED_EVENT Apr 20 20:56:39.055370 containerd[1648]: time="2026-04-20T20:56:39.053745413Z" level=info msg="connecting to shim f15749546948b768fe0433ce347a9399d0fc35375044a1ef7a092742e106d47c" address="unix:///run/containerd/s/69d725090989edb81f3b60f80dd39349600373585414e67b801af9d17d57b839" namespace=k8s.io protocol=ttrpc version=3 Apr 20 20:56:39.251693 kubelet[2962]: E0420 20:56:39.247764 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:56:42.957260 systemd[1]: Started cri-containerd-f15749546948b768fe0433ce347a9399d0fc35375044a1ef7a092742e106d47c.scope - libcontainer container f15749546948b768fe0433ce347a9399d0fc35375044a1ef7a092742e106d47c. 
Apr 20 20:56:44.533069 kubelet[2962]: E0420 20:56:44.531409 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:56:46.075745 containerd[1648]: time="2026-04-20T20:56:46.073189667Z" level=info msg="RunPodSandbox for name:\"kube-flannel-ds-lpvvz\" uid:\"81b75921-1f05-4ead-84ce-450f9339eb4c\" namespace:\"kube-flannel\" returns sandbox id \"f15749546948b768fe0433ce347a9399d0fc35375044a1ef7a092742e106d47c\"" Apr 20 20:56:46.248752 kubelet[2962]: E0420 20:56:46.248368 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:56:46.430082 containerd[1648]: time="2026-04-20T20:56:46.376667524Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Apr 20 20:56:49.973434 kubelet[2962]: E0420 20:56:49.910125 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:56:53.797306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4138849408.mount: Deactivated successfully. 
Apr 20 20:56:54.687631 containerd[1648]: time="2026-04-20T20:56:54.685989875Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:56:54.689905 containerd[1648]: time="2026-04-20T20:56:54.688055408Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=3641610" Apr 20 20:56:54.696477 containerd[1648]: time="2026-04-20T20:56:54.696121506Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:56:54.766785 containerd[1648]: time="2026-04-20T20:56:54.764702574Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:56:54.797336 containerd[1648]: time="2026-04-20T20:56:54.796870507Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 8.420152135s" Apr 20 20:56:54.797336 containerd[1648]: time="2026-04-20T20:56:54.796942464Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Apr 20 20:56:54.905578 containerd[1648]: time="2026-04-20T20:56:54.904019803Z" level=info msg="CreateContainer within sandbox \"f15749546948b768fe0433ce347a9399d0fc35375044a1ef7a092742e106d47c\" for container name:\"install-cni-plugin\"" Apr 20 
20:56:54.992977 kubelet[2962]: E0420 20:56:54.992418 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:56:55.119447 containerd[1648]: time="2026-04-20T20:56:55.119330888Z" level=info msg="Container 1c3c2729541386623619476f7a37a059b421e2f65a517192b6b9e05944daa86e: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:56:55.141272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount826099690.mount: Deactivated successfully. Apr 20 20:56:55.191595 containerd[1648]: time="2026-04-20T20:56:55.189690073Z" level=info msg="CreateContainer within sandbox \"f15749546948b768fe0433ce347a9399d0fc35375044a1ef7a092742e106d47c\" for name:\"install-cni-plugin\" returns container id \"1c3c2729541386623619476f7a37a059b421e2f65a517192b6b9e05944daa86e\"" Apr 20 20:56:55.351950 containerd[1648]: time="2026-04-20T20:56:55.346559476Z" level=info msg="StartContainer for \"1c3c2729541386623619476f7a37a059b421e2f65a517192b6b9e05944daa86e\"" Apr 20 20:56:55.364110 containerd[1648]: time="2026-04-20T20:56:55.364076410Z" level=info msg="connecting to shim 1c3c2729541386623619476f7a37a059b421e2f65a517192b6b9e05944daa86e" address="unix:///run/containerd/s/69d725090989edb81f3b60f80dd39349600373585414e67b801af9d17d57b839" protocol=ttrpc version=3 Apr 20 20:56:55.705380 systemd[1]: Started cri-containerd-1c3c2729541386623619476f7a37a059b421e2f65a517192b6b9e05944daa86e.scope - libcontainer container 1c3c2729541386623619476f7a37a059b421e2f65a517192b6b9e05944daa86e. Apr 20 20:56:56.124304 systemd[1]: cri-containerd-1c3c2729541386623619476f7a37a059b421e2f65a517192b6b9e05944daa86e.scope: Deactivated successfully. 
Apr 20 20:56:56.145554 containerd[1648]: time="2026-04-20T20:56:56.144413024Z" level=info msg="StartContainer for \"1c3c2729541386623619476f7a37a059b421e2f65a517192b6b9e05944daa86e\" returns successfully" Apr 20 20:56:56.157769 containerd[1648]: time="2026-04-20T20:56:56.156633132Z" level=info msg="received container exit event container_id:\"1c3c2729541386623619476f7a37a059b421e2f65a517192b6b9e05944daa86e\" id:\"1c3c2729541386623619476f7a37a059b421e2f65a517192b6b9e05944daa86e\" pid:3677 exited_at:{seconds:1776718616 nanos:151525428}" Apr 20 20:56:56.546658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c3c2729541386623619476f7a37a059b421e2f65a517192b6b9e05944daa86e-rootfs.mount: Deactivated successfully. Apr 20 20:56:57.479277 kubelet[2962]: E0420 20:56:57.478714 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:56:57.506848 containerd[1648]: time="2026-04-20T20:56:57.506158446Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Apr 20 20:57:00.067857 kubelet[2962]: E0420 20:57:00.063890 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:57:05.174054 kubelet[2962]: E0420 20:57:05.169599 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:57:09.350517 kubelet[2962]: E0420 20:57:09.349807 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:57:10.208640 kubelet[2962]: E0420 20:57:10.207603 2962 kubelet.go:3130] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:57:13.436585 containerd[1648]: time="2026-04-20T20:57:13.436104256Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:57:13.518738 containerd[1648]: time="2026-04-20T20:57:13.450897214Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=28420318" Apr 20 20:57:13.518738 containerd[1648]: time="2026-04-20T20:57:13.516600179Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:57:13.534371 containerd[1648]: time="2026-04-20T20:57:13.533168418Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:57:13.573963 containerd[1648]: time="2026-04-20T20:57:13.573299125Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 16.066972516s" Apr 20 20:57:13.573963 containerd[1648]: time="2026-04-20T20:57:13.573360285Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Apr 20 20:57:13.881749 containerd[1648]: time="2026-04-20T20:57:13.880269140Z" level=info msg="CreateContainer within sandbox \"f15749546948b768fe0433ce347a9399d0fc35375044a1ef7a092742e106d47c\" for container name:\"install-cni\"" Apr 20 
20:57:14.202074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1426275564.mount: Deactivated successfully. Apr 20 20:57:14.409805 containerd[1648]: time="2026-04-20T20:57:14.406722235Z" level=info msg="Container 222e6596c387fe601783ae57f1ffc8ebc0985e2b630384c9abf98b63a1c95f71: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:57:14.482429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount915225189.mount: Deactivated successfully. Apr 20 20:57:14.643008 containerd[1648]: time="2026-04-20T20:57:14.625909649Z" level=info msg="CreateContainer within sandbox \"f15749546948b768fe0433ce347a9399d0fc35375044a1ef7a092742e106d47c\" for name:\"install-cni\" returns container id \"222e6596c387fe601783ae57f1ffc8ebc0985e2b630384c9abf98b63a1c95f71\"" Apr 20 20:57:14.652911 containerd[1648]: time="2026-04-20T20:57:14.647752443Z" level=info msg="StartContainer for \"222e6596c387fe601783ae57f1ffc8ebc0985e2b630384c9abf98b63a1c95f71\"" Apr 20 20:57:14.661493 containerd[1648]: time="2026-04-20T20:57:14.658765091Z" level=info msg="connecting to shim 222e6596c387fe601783ae57f1ffc8ebc0985e2b630384c9abf98b63a1c95f71" address="unix:///run/containerd/s/69d725090989edb81f3b60f80dd39349600373585414e67b801af9d17d57b839" protocol=ttrpc version=3 Apr 20 20:57:15.216586 systemd[1]: Started cri-containerd-222e6596c387fe601783ae57f1ffc8ebc0985e2b630384c9abf98b63a1c95f71.scope - libcontainer container 222e6596c387fe601783ae57f1ffc8ebc0985e2b630384c9abf98b63a1c95f71. Apr 20 20:57:15.297608 kubelet[2962]: E0420 20:57:15.287100 2962 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 20:57:16.242315 systemd[1]: cri-containerd-222e6596c387fe601783ae57f1ffc8ebc0985e2b630384c9abf98b63a1c95f71.scope: Deactivated successfully. 
Apr 20 20:57:16.365324 containerd[1648]: time="2026-04-20T20:57:16.265756724Z" level=info msg="StartContainer for \"222e6596c387fe601783ae57f1ffc8ebc0985e2b630384c9abf98b63a1c95f71\" returns successfully" Apr 20 20:57:16.365324 containerd[1648]: time="2026-04-20T20:57:16.269604188Z" level=info msg="received container exit event container_id:\"222e6596c387fe601783ae57f1ffc8ebc0985e2b630384c9abf98b63a1c95f71\" id:\"222e6596c387fe601783ae57f1ffc8ebc0985e2b630384c9abf98b63a1c95f71\" pid:3755 exited_at:{seconds:1776718636 nanos:266101508}" Apr 20 20:57:16.743499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-222e6596c387fe601783ae57f1ffc8ebc0985e2b630384c9abf98b63a1c95f71-rootfs.mount: Deactivated successfully. Apr 20 20:57:17.200673 kubelet[2962]: E0420 20:57:17.198658 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:57:17.444461 containerd[1648]: time="2026-04-20T20:57:17.443869245Z" level=info msg="CreateContainer within sandbox \"f15749546948b768fe0433ce347a9399d0fc35375044a1ef7a092742e106d47c\" for container name:\"kube-flannel\"" Apr 20 20:57:17.744869 containerd[1648]: time="2026-04-20T20:57:17.743922051Z" level=info msg="Container cfe6a3ef599aceacb0780dab93f3f2313dabf93d7750567bb42a30ec4b234097: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:57:17.762697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2994445723.mount: Deactivated successfully. 
Apr 20 20:57:17.999042 containerd[1648]: time="2026-04-20T20:57:17.996828583Z" level=info msg="CreateContainer within sandbox \"f15749546948b768fe0433ce347a9399d0fc35375044a1ef7a092742e106d47c\" for name:\"kube-flannel\" returns container id \"cfe6a3ef599aceacb0780dab93f3f2313dabf93d7750567bb42a30ec4b234097\"" Apr 20 20:57:18.042356 containerd[1648]: time="2026-04-20T20:57:18.031695443Z" level=info msg="StartContainer for \"cfe6a3ef599aceacb0780dab93f3f2313dabf93d7750567bb42a30ec4b234097\"" Apr 20 20:57:18.130253 containerd[1648]: time="2026-04-20T20:57:18.127032899Z" level=info msg="connecting to shim cfe6a3ef599aceacb0780dab93f3f2313dabf93d7750567bb42a30ec4b234097" address="unix:///run/containerd/s/69d725090989edb81f3b60f80dd39349600373585414e67b801af9d17d57b839" protocol=ttrpc version=3 Apr 20 20:57:18.478452 systemd[1]: Started cri-containerd-cfe6a3ef599aceacb0780dab93f3f2313dabf93d7750567bb42a30ec4b234097.scope - libcontainer container cfe6a3ef599aceacb0780dab93f3f2313dabf93d7750567bb42a30ec4b234097. 
Apr 20 20:57:18.955531 containerd[1648]: time="2026-04-20T20:57:18.950597341Z" level=info msg="container event discarded" container=7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67 type=CONTAINER_STOPPED_EVENT Apr 20 20:57:19.426224 containerd[1648]: time="2026-04-20T20:57:19.425777136Z" level=info msg="StartContainer for \"cfe6a3ef599aceacb0780dab93f3f2313dabf93d7750567bb42a30ec4b234097\" returns successfully" Apr 20 20:57:19.951614 kubelet[2962]: E0420 20:57:19.951365 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:57:21.005973 kubelet[2962]: E0420 20:57:21.005828 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:57:22.649867 systemd-networkd[1434]: flannel.1: Link UP Apr 20 20:57:22.651461 systemd-networkd[1434]: flannel.1: Gained carrier Apr 20 20:57:24.308525 systemd-networkd[1434]: flannel.1: Gained IPv6LL Apr 20 20:57:24.598295 kubelet[2962]: I0420 20:57:24.596573 2962 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-lpvvz" podStartSLOduration=23.697407006 podStartE2EDuration="54.596551899s" podCreationTimestamp="2026-04-20 20:56:30 +0000 UTC" firstStartedPulling="2026-04-20 20:56:46.369219586 +0000 UTC m=+384.441939697" lastFinishedPulling="2026-04-20 20:57:17.268364477 +0000 UTC m=+415.341084590" observedRunningTime="2026-04-20 20:57:20.46779808 +0000 UTC m=+418.540518195" watchObservedRunningTime="2026-04-20 20:57:24.596551899 +0000 UTC m=+422.669272009" Apr 20 20:57:24.806585 kubelet[2962]: I0420 20:57:24.805315 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnbml\" (UniqueName: 
\"kubernetes.io/projected/bbde19bf-4580-42df-b4e3-a15383ce2354-kube-api-access-bnbml\") pod \"coredns-7d764666f9-hkshx\" (UID: \"bbde19bf-4580-42df-b4e3-a15383ce2354\") " pod="kube-system/coredns-7d764666f9-hkshx" Apr 20 20:57:24.808858 kubelet[2962]: I0420 20:57:24.808651 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbde19bf-4580-42df-b4e3-a15383ce2354-config-volume\") pod \"coredns-7d764666f9-hkshx\" (UID: \"bbde19bf-4580-42df-b4e3-a15383ce2354\") " pod="kube-system/coredns-7d764666f9-hkshx" Apr 20 20:57:25.031276 kubelet[2962]: I0420 20:57:25.029399 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbfe7b07-cca1-4351-90f0-3deae355ced1-config-volume\") pod \"coredns-7d764666f9-k2l74\" (UID: \"dbfe7b07-cca1-4351-90f0-3deae355ced1\") " pod="kube-system/coredns-7d764666f9-k2l74" Apr 20 20:57:25.031276 kubelet[2962]: I0420 20:57:25.029527 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbdbn\" (UniqueName: \"kubernetes.io/projected/dbfe7b07-cca1-4351-90f0-3deae355ced1-kube-api-access-dbdbn\") pod \"coredns-7d764666f9-k2l74\" (UID: \"dbfe7b07-cca1-4351-90f0-3deae355ced1\") " pod="kube-system/coredns-7d764666f9-k2l74" Apr 20 20:57:25.031395 systemd[1]: Created slice kubepods-burstable-poddbfe7b07_cca1_4351_90f0_3deae355ced1.slice - libcontainer container kubepods-burstable-poddbfe7b07_cca1_4351_90f0_3deae355ced1.slice. Apr 20 20:57:25.157651 systemd[1]: Created slice kubepods-burstable-podbbde19bf_4580_42df_b4e3_a15383ce2354.slice - libcontainer container kubepods-burstable-podbbde19bf_4580_42df_b4e3_a15383ce2354.slice. 
Apr 20 20:57:25.488535 kubelet[2962]: E0420 20:57:25.488237 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:57:25.557455 containerd[1648]: time="2026-04-20T20:57:25.553208057Z" level=info msg="RunPodSandbox for name:\"coredns-7d764666f9-hkshx\" uid:\"bbde19bf-4580-42df-b4e3-a15383ce2354\" namespace:\"kube-system\"" Apr 20 20:57:25.818085 kubelet[2962]: E0420 20:57:25.791384 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:57:25.898323 containerd[1648]: time="2026-04-20T20:57:25.889961971Z" level=info msg="RunPodSandbox for name:\"coredns-7d764666f9-k2l74\" uid:\"dbfe7b07-cca1-4351-90f0-3deae355ced1\" namespace:\"kube-system\"" Apr 20 20:57:26.036237 systemd-networkd[1434]: cni0: Link UP Apr 20 20:57:26.039384 systemd-networkd[1434]: cni0: Gained carrier Apr 20 20:57:26.084250 systemd-networkd[1434]: veth96aab2a9: Link UP Apr 20 20:57:26.188686 kernel: cni0: port 1(veth3bc93f5b) entered blocking state Apr 20 20:57:26.227483 kernel: cni0: port 1(veth3bc93f5b) entered disabled state Apr 20 20:57:26.227780 kernel: veth3bc93f5b: entered allmulticast mode Apr 20 20:57:26.227806 kernel: veth3bc93f5b: entered promiscuous mode Apr 20 20:57:26.233869 systemd-networkd[1434]: veth3bc93f5b: Link UP Apr 20 20:57:26.238097 kernel: cni0: port 2(veth96aab2a9) entered blocking state Apr 20 20:57:26.236762 systemd-networkd[1434]: cni0: Lost carrier Apr 20 20:57:26.238236 kernel: cni0: port 2(veth96aab2a9) entered disabled state Apr 20 20:57:26.251463 kernel: veth96aab2a9: entered allmulticast mode Apr 20 20:57:26.262559 kernel: veth96aab2a9: entered promiscuous mode Apr 20 20:57:26.505850 kernel: cni0: port 1(veth3bc93f5b) entered blocking state Apr 20 20:57:26.507058 kernel: cni0: port 1(veth3bc93f5b) entered 
forwarding state Apr 20 20:57:26.540507 systemd-networkd[1434]: veth3bc93f5b: Gained carrier Apr 20 20:57:26.549946 systemd-networkd[1434]: cni0: Gained carrier Apr 20 20:57:26.749328 kernel: cni0: port 2(veth96aab2a9) entered blocking state Apr 20 20:57:26.748623 systemd-networkd[1434]: veth96aab2a9: Gained carrier Apr 20 20:57:26.906613 containerd[1648]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000a48e0), "name":"cbr0", "type":"bridge"} Apr 20 20:57:26.906613 containerd[1648]: delegateAdd: netconf sent to delegate plugin: Apr 20 20:57:26.906613 containerd[1648]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Apr 20 20:57:26.906613 containerd[1648]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000106800), "name":"cbr0", "type":"bridge"} Apr 20 20:57:26.906613 containerd[1648]: delegateAdd: netconf sent to delegate plugin: Apr 20 
20:57:26.910813 kernel: cni0: port 2(veth96aab2a9) entered forwarding state Apr 20 20:57:27.118062 containerd[1648]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-04-20T20:57:27.117074764Z" level=info msg="connecting to shim 0cfd6fc310894ce15e8ee8efa57cdb5662689d20c429eff461493c6b57c21020" address="unix:///run/containerd/s/7ff55851a877269a42f5804eefc688347e40a61c49af4e45e867ef8802026715" namespace=k8s.io protocol=ttrpc version=3 Apr 20 20:57:27.159795 containerd[1648]: time="2026-04-20T20:57:27.159284146Z" level=info msg="connecting to shim 39868544295034a876a86922ef937bc09b0499460d2d24025c119882629e7208" address="unix:///run/containerd/s/539c180fc96ce0fced44b07f0822e18a0057308ed3a669125d2d89d2c685d0c7" namespace=k8s.io protocol=ttrpc version=3 Apr 20 20:57:27.586755 systemd[1]: Started cri-containerd-0cfd6fc310894ce15e8ee8efa57cdb5662689d20c429eff461493c6b57c21020.scope - libcontainer container 0cfd6fc310894ce15e8ee8efa57cdb5662689d20c429eff461493c6b57c21020. Apr 20 20:57:27.710262 systemd[1]: Started cri-containerd-39868544295034a876a86922ef937bc09b0499460d2d24025c119882629e7208.scope - libcontainer container 39868544295034a876a86922ef937bc09b0499460d2d24025c119882629e7208. 
Apr 20 20:57:27.756843 systemd-networkd[1434]: cni0: Gained IPv6LL Apr 20 20:57:27.817192 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 20 20:57:27.851829 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 20 20:57:28.123654 containerd[1648]: time="2026-04-20T20:57:28.122518143Z" level=info msg="RunPodSandbox for name:\"coredns-7d764666f9-k2l74\" uid:\"dbfe7b07-cca1-4351-90f0-3deae355ced1\" namespace:\"kube-system\" returns sandbox id \"0cfd6fc310894ce15e8ee8efa57cdb5662689d20c429eff461493c6b57c21020\"" Apr 20 20:57:28.141273 systemd-networkd[1434]: veth3bc93f5b: Gained IPv6LL Apr 20 20:57:28.153989 containerd[1648]: time="2026-04-20T20:57:28.149860250Z" level=info msg="RunPodSandbox for name:\"coredns-7d764666f9-hkshx\" uid:\"bbde19bf-4580-42df-b4e3-a15383ce2354\" namespace:\"kube-system\" returns sandbox id \"39868544295034a876a86922ef937bc09b0499460d2d24025c119882629e7208\"" Apr 20 20:57:28.193308 kubelet[2962]: E0420 20:57:28.192799 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:57:28.199053 kubelet[2962]: E0420 20:57:28.194763 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:57:28.345394 containerd[1648]: time="2026-04-20T20:57:28.343938146Z" level=info msg="CreateContainer within sandbox \"39868544295034a876a86922ef937bc09b0499460d2d24025c119882629e7208\" for container name:\"coredns\"" Apr 20 20:57:28.362900 containerd[1648]: time="2026-04-20T20:57:28.361776042Z" level=info msg="CreateContainer within sandbox \"0cfd6fc310894ce15e8ee8efa57cdb5662689d20c429eff461493c6b57c21020\" for container name:\"coredns\"" Apr 20 20:57:28.523399 
systemd-networkd[1434]: veth96aab2a9: Gained IPv6LL Apr 20 20:57:28.747248 containerd[1648]: time="2026-04-20T20:57:28.746276081Z" level=info msg="Container aff3f84edf1a6dba21ac313f9bd1634d979129850243edee88f6a9d118138ccc: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:57:28.750616 containerd[1648]: time="2026-04-20T20:57:28.749612485Z" level=info msg="Container d86e372506afd8696f12ab76bf0d1a75ddc6ae2b084b48a5ba4635d05e53ec74: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:57:28.923908 containerd[1648]: time="2026-04-20T20:57:28.918591345Z" level=info msg="CreateContainer within sandbox \"0cfd6fc310894ce15e8ee8efa57cdb5662689d20c429eff461493c6b57c21020\" for name:\"coredns\" returns container id \"aff3f84edf1a6dba21ac313f9bd1634d979129850243edee88f6a9d118138ccc\"" Apr 20 20:57:28.923908 containerd[1648]: time="2026-04-20T20:57:28.921675071Z" level=info msg="StartContainer for \"aff3f84edf1a6dba21ac313f9bd1634d979129850243edee88f6a9d118138ccc\"" Apr 20 20:57:28.965083 containerd[1648]: time="2026-04-20T20:57:28.956479704Z" level=info msg="connecting to shim aff3f84edf1a6dba21ac313f9bd1634d979129850243edee88f6a9d118138ccc" address="unix:///run/containerd/s/7ff55851a877269a42f5804eefc688347e40a61c49af4e45e867ef8802026715" protocol=ttrpc version=3 Apr 20 20:57:29.061984 containerd[1648]: time="2026-04-20T20:57:29.060852548Z" level=info msg="CreateContainer within sandbox \"39868544295034a876a86922ef937bc09b0499460d2d24025c119882629e7208\" for name:\"coredns\" returns container id \"d86e372506afd8696f12ab76bf0d1a75ddc6ae2b084b48a5ba4635d05e53ec74\"" Apr 20 20:57:29.093203 containerd[1648]: time="2026-04-20T20:57:29.093128111Z" level=info msg="StartContainer for \"d86e372506afd8696f12ab76bf0d1a75ddc6ae2b084b48a5ba4635d05e53ec74\"" Apr 20 20:57:29.165288 containerd[1648]: time="2026-04-20T20:57:29.164717719Z" level=info msg="connecting to shim d86e372506afd8696f12ab76bf0d1a75ddc6ae2b084b48a5ba4635d05e53ec74" 
address="unix:///run/containerd/s/539c180fc96ce0fced44b07f0822e18a0057308ed3a669125d2d89d2c685d0c7" protocol=ttrpc version=3 Apr 20 20:57:29.925687 containerd[1648]: time="2026-04-20T20:57:29.919074429Z" level=info msg="container event discarded" container=8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8 type=CONTAINER_CREATED_EVENT Apr 20 20:57:29.938256 systemd[1]: Started cri-containerd-aff3f84edf1a6dba21ac313f9bd1634d979129850243edee88f6a9d118138ccc.scope - libcontainer container aff3f84edf1a6dba21ac313f9bd1634d979129850243edee88f6a9d118138ccc. Apr 20 20:57:30.499016 systemd[1]: Started cri-containerd-d86e372506afd8696f12ab76bf0d1a75ddc6ae2b084b48a5ba4635d05e53ec74.scope - libcontainer container d86e372506afd8696f12ab76bf0d1a75ddc6ae2b084b48a5ba4635d05e53ec74. Apr 20 20:57:31.388416 kubelet[2962]: E0420 20:57:31.386586 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:57:32.965024 containerd[1648]: time="2026-04-20T20:57:32.959802585Z" level=info msg="StartContainer for \"aff3f84edf1a6dba21ac313f9bd1634d979129850243edee88f6a9d118138ccc\" returns successfully" Apr 20 20:57:33.076038 containerd[1648]: time="2026-04-20T20:57:32.961000302Z" level=info msg="StartContainer for \"d86e372506afd8696f12ab76bf0d1a75ddc6ae2b084b48a5ba4635d05e53ec74\" returns successfully" Apr 20 20:57:33.347832 containerd[1648]: time="2026-04-20T20:57:33.342950497Z" level=info msg="container event discarded" container=21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5 type=CONTAINER_STOPPED_EVENT Apr 20 20:57:35.706397 kubelet[2962]: E0420 20:57:35.700634 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:57:35.802766 kubelet[2962]: E0420 20:57:35.731037 2962 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:57:36.696437 kubelet[2962]: E0420 20:57:36.694193 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:57:36.710343 kubelet[2962]: E0420 20:57:36.695039 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:57:37.523737 containerd[1648]: time="2026-04-20T20:57:37.513881895Z" level=info msg="container event discarded" container=c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948 type=CONTAINER_CREATED_EVENT Apr 20 20:57:37.762547 kubelet[2962]: E0420 20:57:37.762099 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:57:37.856997 kubelet[2962]: E0420 20:57:37.853092 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:57:40.208921 kubelet[2962]: I0420 20:57:40.203555 2962 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-k2l74" podStartSLOduration=71.203363641 podStartE2EDuration="1m11.203363641s" podCreationTimestamp="2026-04-20 20:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 20:57:40.202917338 +0000 UTC m=+438.275637464" watchObservedRunningTime="2026-04-20 20:57:40.203363641 +0000 UTC m=+438.276083756" Apr 20 20:57:43.481691 containerd[1648]: time="2026-04-20T20:57:43.480113423Z" level=info msg="container event 
discarded" container=8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8 type=CONTAINER_STARTED_EVENT Apr 20 20:57:45.304872 kubelet[2962]: E0420 20:57:45.301386 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:57:47.868224 kubelet[2962]: E0420 20:57:47.867900 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:57:47.883619 kubelet[2962]: E0420 20:57:47.869561 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:57:48.859427 kubelet[2962]: I0420 20:57:48.858862 2962 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-hkshx" podStartSLOduration=73.858821807 podStartE2EDuration="1m13.858821807s" podCreationTimestamp="2026-04-20 20:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 20:57:41.105093475 +0000 UTC m=+439.177813598" watchObservedRunningTime="2026-04-20 20:57:48.858821807 +0000 UTC m=+446.931541913" Apr 20 20:57:53.971988 containerd[1648]: time="2026-04-20T20:57:53.971417524Z" level=info msg="container event discarded" container=c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948 type=CONTAINER_STARTED_EVENT Apr 20 20:57:56.314098 kubelet[2962]: E0420 20:57:56.308939 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:58:23.439951 kubelet[2962]: E0420 20:58:23.438438 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:58:39.561659 kubelet[2962]: E0420 20:58:39.545025 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:58:44.155180 kubelet[2962]: E0420 20:58:44.153357 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:58:48.820654 containerd[1648]: time="2026-04-20T20:58:48.811582343Z" level=info msg="container event discarded" container=8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8 type=CONTAINER_STOPPED_EVENT Apr 20 20:58:50.364577 kubelet[2962]: E0420 20:58:50.362695 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.057s" Apr 20 20:58:51.454001 kubelet[2962]: E0420 20:58:51.451760 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:58:51.564937 containerd[1648]: time="2026-04-20T20:58:51.555537578Z" level=info msg="container event discarded" container=7f9cd6ab01fe9136e0499002ce6ffc091ffa23a0a6c54d96fd9f8e91e0323f67 type=CONTAINER_DELETED_EVENT Apr 20 20:58:52.409561 kubelet[2962]: E0420 20:58:52.408787 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:58:52.412038 kubelet[2962]: E0420 20:58:52.411805 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:58:59.322649 kubelet[2962]: E0420 20:58:59.319664 2962 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:59:01.508321 containerd[1648]: time="2026-04-20T20:59:01.498199248Z" level=info msg="container event discarded" container=c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948 type=CONTAINER_STOPPED_EVENT Apr 20 20:59:03.195613 containerd[1648]: time="2026-04-20T20:59:03.194679879Z" level=info msg="container event discarded" container=21ba7315827bad47f3f20a58d7126cf8611c520aeebb8f157eb2df2d01b59af5 type=CONTAINER_DELETED_EVENT Apr 20 20:59:07.568703 systemd[1]: cri-containerd-ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425.scope: Deactivated successfully. Apr 20 20:59:07.569436 systemd[1]: cri-containerd-ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425.scope: Consumed 1min 46.111s CPU time, 55.5M memory peak. Apr 20 20:59:07.720494 containerd[1648]: time="2026-04-20T20:59:07.720121371Z" level=info msg="received container exit event container_id:\"ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425\" id:\"ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425\" pid:3512 exit_status:1 exited_at:{seconds:1776718747 nanos:604254907}" Apr 20 20:59:09.715299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425-rootfs.mount: Deactivated successfully. 
Apr 20 20:59:11.335560 kubelet[2962]: I0420 20:59:11.326538 2962 scope.go:122] "RemoveContainer" containerID="8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8" Apr 20 20:59:11.554883 kubelet[2962]: I0420 20:59:11.408315 2962 scope.go:122] "RemoveContainer" containerID="ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425" Apr 20 20:59:11.762618 kubelet[2962]: E0420 20:59:11.743614 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:59:11.882698 containerd[1648]: time="2026-04-20T20:59:11.779168989Z" level=info msg="container event discarded" container=ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425 type=CONTAINER_CREATED_EVENT Apr 20 20:59:11.934531 kubelet[2962]: E0420 20:59:11.897799 2962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" Apr 20 20:59:12.185269 containerd[1648]: time="2026-04-20T20:59:12.162835125Z" level=info msg="RemoveContainer for \"8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8\"" Apr 20 20:59:12.588389 kubelet[2962]: E0420 20:59:12.587626 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.277s" Apr 20 20:59:12.880702 containerd[1648]: time="2026-04-20T20:59:12.874101986Z" level=info msg="RemoveContainer for \"8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8\" returns successfully" Apr 20 20:59:13.381292 systemd[1]: cri-containerd-ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70.scope: Deactivated successfully. 
Apr 20 20:59:13.404285 systemd[1]: cri-containerd-ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70.scope: Consumed 1min 8.528s CPU time, 24.6M memory peak. Apr 20 20:59:13.452750 containerd[1648]: time="2026-04-20T20:59:13.402957217Z" level=info msg="received container exit event container_id:\"ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70\" id:\"ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70\" pid:3556 exit_status:1 exited_at:{seconds:1776718753 nanos:380995082}" Apr 20 20:59:14.800366 kubelet[2962]: E0420 20:59:14.799966 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:59:15.073580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70-rootfs.mount: Deactivated successfully. Apr 20 20:59:16.244463 kubelet[2962]: I0420 20:59:16.243617 2962 scope.go:122] "RemoveContainer" containerID="ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70" Apr 20 20:59:16.246647 kubelet[2962]: E0420 20:59:16.246418 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:59:16.249706 kubelet[2962]: E0420 20:59:16.248812 2962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca" Apr 20 20:59:20.676650 kubelet[2962]: I0420 20:59:20.675169 2962 scope.go:122] "RemoveContainer" containerID="ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425" Apr 20 20:59:20.676650 kubelet[2962]: 
E0420 20:59:20.675678 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:59:20.676650 kubelet[2962]: E0420 20:59:20.676581 2962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" Apr 20 20:59:23.874480 containerd[1648]: time="2026-04-20T20:59:23.872914685Z" level=info msg="container event discarded" container=ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70 type=CONTAINER_CREATED_EVENT Apr 20 20:59:24.543421 kubelet[2962]: I0420 20:59:24.542844 2962 scope.go:122] "RemoveContainer" containerID="ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70" Apr 20 20:59:24.653201 kubelet[2962]: E0420 20:59:24.560822 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:59:24.653201 kubelet[2962]: E0420 20:59:24.578907 2962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca" Apr 20 20:59:26.060045 containerd[1648]: time="2026-04-20T20:59:26.056875858Z" level=info msg="container event discarded" container=ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425 type=CONTAINER_STARTED_EVENT Apr 20 20:59:32.955409 containerd[1648]: 
time="2026-04-20T20:59:32.953844479Z" level=info msg="container event discarded" container=c68c35ba447ae4d6a25fd293f299c7283247f1224464b82558d952eb7772d948 type=CONTAINER_DELETED_EVENT Apr 20 20:59:33.198778 containerd[1648]: time="2026-04-20T20:59:33.197578972Z" level=info msg="container event discarded" container=ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70 type=CONTAINER_STARTED_EVENT Apr 20 20:59:33.233007 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Apr 20 20:59:34.852161 systemd-tmpfiles[4549]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 20 20:59:34.852184 systemd-tmpfiles[4549]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 20 20:59:34.852607 systemd-tmpfiles[4549]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 20 20:59:34.948655 systemd-tmpfiles[4549]: ACLs are not supported, ignoring. Apr 20 20:59:34.953480 systemd-tmpfiles[4549]: ACLs are not supported, ignoring. Apr 20 20:59:35.136336 systemd-tmpfiles[4549]: Detected autofs mount point /boot during canonicalization of boot. Apr 20 20:59:35.136360 systemd-tmpfiles[4549]: Skipping /boot Apr 20 20:59:35.293702 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Apr 20 20:59:35.339116 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. 
Apr 20 20:59:36.714318 kubelet[2962]: E0420 20:59:36.712062 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.372s" Apr 20 20:59:49.961685 kubelet[2962]: I0420 20:59:49.950040 2962 scope.go:122] "RemoveContainer" containerID="ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425" Apr 20 20:59:50.174674 kubelet[2962]: E0420 20:59:50.069773 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:59:51.465715 containerd[1648]: time="2026-04-20T20:59:51.461702381Z" level=info msg="CreateContainer within sandbox \"f9b6b7d24ccd8773e81c37185f210b92cf271465cbfe4f09851166ce3e8a2a4b\" for container name:\"kube-controller-manager\" attempt:5" Apr 20 20:59:52.056682 containerd[1648]: time="2026-04-20T20:59:52.050329973Z" level=info msg="Container 603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:59:52.246327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4113827258.mount: Deactivated successfully. 
Apr 20 20:59:52.611766 containerd[1648]: time="2026-04-20T20:59:52.607606011Z" level=info msg="CreateContainer within sandbox \"f9b6b7d24ccd8773e81c37185f210b92cf271465cbfe4f09851166ce3e8a2a4b\" for name:\"kube-controller-manager\" attempt:5 returns container id \"603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743\"" Apr 20 20:59:52.653504 kubelet[2962]: E0420 20:59:52.626837 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.346s" Apr 20 20:59:52.892066 containerd[1648]: time="2026-04-20T20:59:52.879959100Z" level=info msg="StartContainer for \"603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743\"" Apr 20 20:59:53.098668 containerd[1648]: time="2026-04-20T20:59:53.094653565Z" level=info msg="connecting to shim 603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743" address="unix:///run/containerd/s/8b77fcd47a339a13e379c28c84db3ce17f41850650ed4777ce96169d01489760" protocol=ttrpc version=3 Apr 20 20:59:54.657764 kubelet[2962]: E0420 20:59:54.649775 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.235s" Apr 20 20:59:54.899943 systemd[1]: Started cri-containerd-603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743.scope - libcontainer container 603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743. 
Apr 20 20:59:56.447742 kubelet[2962]: E0420 20:59:56.447220 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.161s" Apr 20 20:59:57.676486 containerd[1648]: time="2026-04-20T20:59:57.651556133Z" level=error msg="get state for 603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743" error="context deadline exceeded" Apr 20 20:59:57.676486 containerd[1648]: time="2026-04-20T20:59:57.651845352Z" level=warning msg="unknown status" status=0 Apr 20 20:59:57.898904 kubelet[2962]: I0420 20:59:57.895471 2962 scope.go:122] "RemoveContainer" containerID="ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70" Apr 20 20:59:58.031831 kubelet[2962]: E0420 20:59:57.935426 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:59:58.031831 kubelet[2962]: E0420 20:59:57.935833 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:59:58.695610 containerd[1648]: time="2026-04-20T20:59:58.693787808Z" level=info msg="CreateContainer within sandbox \"c96e5b00fc0697c110fb24dcdaa0b47b62e5b149bf28b5d19396495840a65284\" for container name:\"kube-scheduler\" attempt:4" Apr 20 20:59:59.470867 containerd[1648]: time="2026-04-20T20:59:59.462904822Z" level=info msg="Container e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:59:59.931670 containerd[1648]: time="2026-04-20T20:59:59.874013053Z" level=error msg="get state for 603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743" error="context deadline exceeded" Apr 20 20:59:59.931670 containerd[1648]: time="2026-04-20T20:59:59.931582542Z" level=warning msg="unknown status" status=0 Apr 20 21:00:00.073836 
containerd[1648]: time="2026-04-20T21:00:00.073360375Z" level=info msg="CreateContainer within sandbox \"c96e5b00fc0697c110fb24dcdaa0b47b62e5b149bf28b5d19396495840a65284\" for name:\"kube-scheduler\" attempt:4 returns container id \"e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1\"" Apr 20 21:00:00.132198 kubelet[2962]: E0420 21:00:00.130905 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:00:00.144378 containerd[1648]: time="2026-04-20T21:00:00.142235950Z" level=info msg="StartContainer for \"e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1\"" Apr 20 21:00:00.264408 containerd[1648]: time="2026-04-20T21:00:00.210665145Z" level=info msg="connecting to shim e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1" address="unix:///run/containerd/s/61d64848142b77a3bbfcc5d60ff12803e5d69747435a7b24f6de5ae72a49376f" protocol=ttrpc version=3 Apr 20 21:00:02.153396 containerd[1648]: time="2026-04-20T21:00:02.148469648Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 21:00:02.260730 containerd[1648]: time="2026-04-20T21:00:02.163011505Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 20 21:00:02.593844 kubelet[2962]: E0420 21:00:02.588604 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.28s" Apr 20 21:00:02.938379 systemd[1]: Started cri-containerd-e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1.scope - libcontainer container e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1. 
Apr 20 21:00:04.290776 containerd[1648]: time="2026-04-20T21:00:04.288595173Z" level=info msg="StartContainer for \"603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743\" returns successfully" Apr 20 21:00:05.775740 containerd[1648]: time="2026-04-20T21:00:05.774816685Z" level=error msg="get state for e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1" error="context deadline exceeded" Apr 20 21:00:05.775740 containerd[1648]: time="2026-04-20T21:00:05.775633041Z" level=warning msg="unknown status" status=0 Apr 20 21:00:06.310105 kubelet[2962]: E0420 21:00:06.296918 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:00:08.350998 containerd[1648]: time="2026-04-20T21:00:08.344693684Z" level=error msg="get state for e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1" error="context deadline exceeded" Apr 20 21:00:08.350998 containerd[1648]: time="2026-04-20T21:00:08.345714977Z" level=warning msg="unknown status" status=0 Apr 20 21:00:08.570776 kubelet[2962]: E0420 21:00:08.533076 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:00:09.567778 kubelet[2962]: E0420 21:00:09.566545 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:00:09.588968 kubelet[2962]: E0420 21:00:09.587678 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:00:10.798798 containerd[1648]: time="2026-04-20T21:00:10.797705834Z" level=error msg="get state for e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1" 
error="context deadline exceeded" Apr 20 21:00:10.880610 containerd[1648]: time="2026-04-20T21:00:10.827717063Z" level=warning msg="unknown status" status=0 Apr 20 21:00:12.452822 containerd[1648]: time="2026-04-20T21:00:12.441562321Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 21:00:12.553789 containerd[1648]: time="2026-04-20T21:00:12.492612779Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 20 21:00:12.553789 containerd[1648]: time="2026-04-20T21:00:12.492869745Z" level=error msg="ttrpc: received message on inactive stream" stream=7 Apr 20 21:00:13.061841 kubelet[2962]: E0420 21:00:13.059654 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:00:13.867719 containerd[1648]: time="2026-04-20T21:00:13.866543175Z" level=info msg="StartContainer for \"e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1\" returns successfully" Apr 20 21:00:14.099532 kubelet[2962]: E0420 21:00:14.097816 2962 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 20 21:00:14.774803 kubelet[2962]: E0420 21:00:14.772551 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:00:15.864833 kubelet[2962]: E0420 21:00:15.864559 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:00:21.740526 kubelet[2962]: E0420 21:00:21.737936 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Apr 20 21:00:24.733999 kubelet[2962]: E0420 21:00:24.733689 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:00:30.994661 kubelet[2962]: E0420 21:00:30.993566 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:00:34.317177 kubelet[2962]: E0420 21:00:34.313887 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.033s" Apr 20 21:00:38.697571 kubelet[2962]: E0420 21:00:38.683581 2962 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 20 21:00:40.979608 kubelet[2962]: E0420 21:00:40.970718 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.682s" Apr 20 21:00:42.765918 kubelet[2962]: E0420 21:00:42.765611 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.477s" Apr 20 21:00:48.781494 kubelet[2962]: E0420 21:00:48.779878 2962 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 20 21:00:52.257987 kubelet[2962]: E0420 21:00:52.256960 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:01:01.955232 kubelet[2962]: E0420 21:01:01.954197 2962 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:01:03.133600 kubelet[2962]: E0420 21:01:03.131642 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:01:10.287756 kubelet[2962]: E0420 21:01:10.286839 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:01:19.383204 kubelet[2962]: E0420 21:01:19.376185 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:01:28.278622 kubelet[2962]: E0420 21:01:28.278447 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:01:37.282712 kubelet[2962]: E0420 21:01:37.281373 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:01:38.291321 kubelet[2962]: E0420 21:01:38.291157 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:01:39.887761 systemd[1]: Started sshd@5-4099-10.0.0.6:22-10.0.0.1:49962.service - OpenSSH per-connection server daemon (10.0.0.1:49962). 
Apr 20 21:01:40.611905 sshd[5011]: Accepted publickey for core from 10.0.0.1 port 49962 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:01:40.705770 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:01:40.807890 systemd-logind[1620]: New session '7' of user 'core' with class 'user' and type 'tty'. Apr 20 21:01:40.881797 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 20 21:01:46.082666 containerd[1648]: time="2026-04-20T21:01:46.080459791Z" level=info msg="container event discarded" container=f15749546948b768fe0433ce347a9399d0fc35375044a1ef7a092742e106d47c type=CONTAINER_CREATED_EVENT Apr 20 21:01:46.082666 containerd[1648]: time="2026-04-20T21:01:46.086609771Z" level=info msg="container event discarded" container=f15749546948b768fe0433ce347a9399d0fc35375044a1ef7a092742e106d47c type=CONTAINER_STARTED_EVENT Apr 20 21:01:47.165107 sshd[5015]: Connection closed by 10.0.0.1 port 49962 Apr 20 21:01:47.164282 sshd-session[5011]: pam_unix(sshd:session): session closed for user core Apr 20 21:01:47.282299 systemd[1]: sshd@5-4099-10.0.0.6:22-10.0.0.1:49962.service: Deactivated successfully. Apr 20 21:01:47.453651 systemd[1]: session-7.scope: Deactivated successfully. Apr 20 21:01:47.466325 systemd[1]: session-7.scope: Consumed 3.809s CPU time, 15.9M memory peak. Apr 20 21:01:47.497920 systemd-logind[1620]: Session 7 logged out. Waiting for processes to exit. Apr 20 21:01:47.592476 systemd-logind[1620]: Removed session 7. Apr 20 21:01:52.401231 systemd[1]: Started sshd@6-12289-10.0.0.6:22-10.0.0.1:60336.service - OpenSSH per-connection server daemon (10.0.0.1:60336). 
Apr 20 21:01:53.157009 sshd[5078]: Accepted publickey for core from 10.0.0.1 port 60336 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:01:53.173808 sshd-session[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:01:53.363766 systemd-logind[1620]: New session '8' of user 'core' with class 'user' and type 'tty'. Apr 20 21:01:53.441948 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 20 21:01:55.181508 containerd[1648]: time="2026-04-20T21:01:55.178522966Z" level=info msg="container event discarded" container=1c3c2729541386623619476f7a37a059b421e2f65a517192b6b9e05944daa86e type=CONTAINER_CREATED_EVENT Apr 20 21:01:55.693177 sshd[5091]: Connection closed by 10.0.0.1 port 60336 Apr 20 21:01:55.706357 sshd-session[5078]: pam_unix(sshd:session): session closed for user core Apr 20 21:01:55.856543 systemd[1]: sshd@6-12289-10.0.0.6:22-10.0.0.1:60336.service: Deactivated successfully. Apr 20 21:01:56.048720 systemd[1]: session-8.scope: Deactivated successfully. Apr 20 21:01:56.070960 systemd[1]: session-8.scope: Consumed 1.476s CPU time, 14.3M memory peak. Apr 20 21:01:56.215683 containerd[1648]: time="2026-04-20T21:01:56.167558786Z" level=info msg="container event discarded" container=1c3c2729541386623619476f7a37a059b421e2f65a517192b6b9e05944daa86e type=CONTAINER_STARTED_EVENT Apr 20 21:01:56.209840 systemd-logind[1620]: Session 8 logged out. Waiting for processes to exit. Apr 20 21:01:56.238049 systemd-logind[1620]: Removed session 8. Apr 20 21:01:56.725096 containerd[1648]: time="2026-04-20T21:01:56.724666947Z" level=info msg="container event discarded" container=1c3c2729541386623619476f7a37a059b421e2f65a517192b6b9e05944daa86e type=CONTAINER_STOPPED_EVENT Apr 20 21:02:01.113236 systemd[1]: Started sshd@7-8194-10.0.0.6:22-10.0.0.1:33348.service - OpenSSH per-connection server daemon (10.0.0.1:33348). 
Apr 20 21:02:01.926431 sshd[5139]: Accepted publickey for core from 10.0.0.1 port 33348 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:02:01.930554 sshd-session[5139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:02:01.948630 systemd-logind[1620]: New session '9' of user 'core' with class 'user' and type 'tty'. Apr 20 21:02:01.954005 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 20 21:02:04.329695 sshd[5143]: Connection closed by 10.0.0.1 port 33348 Apr 20 21:02:04.356110 sshd-session[5139]: pam_unix(sshd:session): session closed for user core Apr 20 21:02:04.528925 systemd[1]: sshd@7-8194-10.0.0.6:22-10.0.0.1:33348.service: Deactivated successfully. Apr 20 21:02:04.551245 systemd[1]: session-9.scope: Deactivated successfully. Apr 20 21:02:04.565908 systemd[1]: session-9.scope: Consumed 1.481s CPU time, 15.9M memory peak. Apr 20 21:02:04.630278 systemd-logind[1620]: Session 9 logged out. Waiting for processes to exit. Apr 20 21:02:04.689074 systemd-logind[1620]: Removed session 9. Apr 20 21:02:09.301227 kubelet[2962]: E0420 21:02:09.297003 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:02:09.501683 systemd[1]: Started sshd@8-4100-10.0.0.6:22-10.0.0.1:50230.service - OpenSSH per-connection server daemon (10.0.0.1:50230). Apr 20 21:02:10.103679 sshd[5178]: Accepted publickey for core from 10.0.0.1 port 50230 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:02:10.148762 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:02:10.327092 systemd-logind[1620]: New session '10' of user 'core' with class 'user' and type 'tty'. Apr 20 21:02:10.335613 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 20 21:02:11.712951 sshd[5188]: Connection closed by 10.0.0.1 port 50230 Apr 20 21:02:11.717497 sshd-session[5178]: pam_unix(sshd:session): session closed for user core Apr 20 21:02:11.755848 systemd[1]: sshd@8-4100-10.0.0.6:22-10.0.0.1:50230.service: Deactivated successfully. Apr 20 21:02:11.757576 systemd[1]: session-10.scope: Deactivated successfully. Apr 20 21:02:11.773468 systemd-logind[1620]: Session 10 logged out. Waiting for processes to exit. Apr 20 21:02:11.781417 systemd[1]: Started sshd@9-12290-10.0.0.6:22-10.0.0.1:50242.service - OpenSSH per-connection server daemon (10.0.0.1:50242). Apr 20 21:02:11.785873 systemd-logind[1620]: Removed session 10. Apr 20 21:02:12.180578 sshd[5218]: Accepted publickey for core from 10.0.0.1 port 50242 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:02:12.184991 sshd-session[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:02:12.199878 systemd-logind[1620]: New session '11' of user 'core' with class 'user' and type 'tty'. Apr 20 21:02:12.259869 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 20 21:02:12.278114 kubelet[2962]: E0420 21:02:12.277761 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:02:14.136622 sshd[5222]: Connection closed by 10.0.0.1 port 50242 Apr 20 21:02:14.138514 sshd-session[5218]: pam_unix(sshd:session): session closed for user core Apr 20 21:02:14.179050 systemd[1]: sshd@9-12290-10.0.0.6:22-10.0.0.1:50242.service: Deactivated successfully. Apr 20 21:02:14.288812 systemd[1]: session-11.scope: Deactivated successfully. Apr 20 21:02:14.289867 systemd[1]: session-11.scope: Consumed 1.228s CPU time, 25.3M memory peak. Apr 20 21:02:14.292944 systemd-logind[1620]: Session 11 logged out. Waiting for processes to exit. 
Apr 20 21:02:14.364521 systemd[1]: Started sshd@10-8195-10.0.0.6:22-10.0.0.1:50258.service - OpenSSH per-connection server daemon (10.0.0.1:50258).
Apr 20 21:02:14.381610 systemd-logind[1620]: Removed session 11.
Apr 20 21:02:14.689454 containerd[1648]: time="2026-04-20T21:02:14.688919451Z" level=info msg="container event discarded" container=222e6596c387fe601783ae57f1ffc8ebc0985e2b630384c9abf98b63a1c95f71 type=CONTAINER_CREATED_EVENT
Apr 20 21:02:15.164009 sshd[5236]: Accepted publickey for core from 10.0.0.1 port 50258 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:02:15.192100 sshd-session[5236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:02:15.459692 systemd-logind[1620]: New session '12' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:02:15.570031 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 20 21:02:16.213200 containerd[1648]: time="2026-04-20T21:02:16.211470424Z" level=info msg="container event discarded" container=222e6596c387fe601783ae57f1ffc8ebc0985e2b630384c9abf98b63a1c95f71 type=CONTAINER_STARTED_EVENT
Apr 20 21:02:17.072921 containerd[1648]: time="2026-04-20T21:02:17.068907335Z" level=info msg="container event discarded" container=222e6596c387fe601783ae57f1ffc8ebc0985e2b630384c9abf98b63a1c95f71 type=CONTAINER_STOPPED_EVENT
Apr 20 21:02:17.269807 sshd[5242]: Connection closed by 10.0.0.1 port 50258
Apr 20 21:02:17.274661 sshd-session[5236]: pam_unix(sshd:session): session closed for user core
Apr 20 21:02:17.323888 systemd[1]: sshd@10-8195-10.0.0.6:22-10.0.0.1:50258.service: Deactivated successfully.
Apr 20 21:02:17.359629 systemd[1]: session-12.scope: Deactivated successfully.
Apr 20 21:02:17.360095 systemd[1]: session-12.scope: Consumed 1.007s CPU time, 15.4M memory peak.
Apr 20 21:02:17.361549 systemd-logind[1620]: Session 12 logged out. Waiting for processes to exit.
Apr 20 21:02:17.366259 systemd-logind[1620]: Removed session 12.
Apr 20 21:02:17.991385 containerd[1648]: time="2026-04-20T21:02:17.991012773Z" level=info msg="container event discarded" container=cfe6a3ef599aceacb0780dab93f3f2313dabf93d7750567bb42a30ec4b234097 type=CONTAINER_CREATED_EVENT
Apr 20 21:02:19.322113 kubelet[2962]: E0420 21:02:19.321247 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:02:19.429490 containerd[1648]: time="2026-04-20T21:02:19.425213823Z" level=info msg="container event discarded" container=cfe6a3ef599aceacb0780dab93f3f2313dabf93d7750567bb42a30ec4b234097 type=CONTAINER_STARTED_EVENT
Apr 20 21:02:22.763735 systemd[1]: Started sshd@11-3-10.0.0.6:22-10.0.0.1:47994.service - OpenSSH per-connection server daemon (10.0.0.1:47994).
Apr 20 21:02:23.378040 sshd[5295]: Accepted publickey for core from 10.0.0.1 port 47994 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:02:23.382204 sshd-session[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:02:23.484859 systemd-logind[1620]: New session '13' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:02:23.495271 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 20 21:02:25.724902 sshd[5300]: Connection closed by 10.0.0.1 port 47994
Apr 20 21:02:25.726731 sshd-session[5295]: pam_unix(sshd:session): session closed for user core
Apr 20 21:02:25.860552 systemd[1]: sshd@11-3-10.0.0.6:22-10.0.0.1:47994.service: Deactivated successfully.
Apr 20 21:02:26.147696 systemd[1]: session-13.scope: Deactivated successfully.
Apr 20 21:02:26.194660 systemd[1]: session-13.scope: Consumed 1.684s CPU time, 16.7M memory peak.
Apr 20 21:02:26.279108 systemd-logind[1620]: Session 13 logged out. Waiting for processes to exit.
Apr 20 21:02:26.327033 systemd-logind[1620]: Removed session 13.
Apr 20 21:02:28.139570 containerd[1648]: time="2026-04-20T21:02:28.135741784Z" level=info msg="container event discarded" container=0cfd6fc310894ce15e8ee8efa57cdb5662689d20c429eff461493c6b57c21020 type=CONTAINER_CREATED_EVENT
Apr 20 21:02:28.139570 containerd[1648]: time="2026-04-20T21:02:28.139227172Z" level=info msg="container event discarded" container=0cfd6fc310894ce15e8ee8efa57cdb5662689d20c429eff461493c6b57c21020 type=CONTAINER_STARTED_EVENT
Apr 20 21:02:28.182827 containerd[1648]: time="2026-04-20T21:02:28.164968229Z" level=info msg="container event discarded" container=39868544295034a876a86922ef937bc09b0499460d2d24025c119882629e7208 type=CONTAINER_CREATED_EVENT
Apr 20 21:02:28.182827 containerd[1648]: time="2026-04-20T21:02:28.165349362Z" level=info msg="container event discarded" container=39868544295034a876a86922ef937bc09b0499460d2d24025c119882629e7208 type=CONTAINER_STARTED_EVENT
Apr 20 21:02:28.989001 containerd[1648]: time="2026-04-20T21:02:28.988504506Z" level=info msg="container event discarded" container=aff3f84edf1a6dba21ac313f9bd1634d979129850243edee88f6a9d118138ccc type=CONTAINER_CREATED_EVENT
Apr 20 21:02:29.020967 containerd[1648]: time="2026-04-20T21:02:29.019945690Z" level=info msg="container event discarded" container=d86e372506afd8696f12ab76bf0d1a75ddc6ae2b084b48a5ba4635d05e53ec74 type=CONTAINER_CREATED_EVENT
Apr 20 21:02:31.228384 systemd[1]: Started sshd@12-4101-10.0.0.6:22-10.0.0.1:43328.service - OpenSSH per-connection server daemon (10.0.0.1:43328).
Apr 20 21:02:31.811356 sshd[5336]: Accepted publickey for core from 10.0.0.1 port 43328 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:02:31.878434 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:02:32.076965 systemd-logind[1620]: New session '14' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:02:32.098652 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 20 21:02:32.766967 containerd[1648]: time="2026-04-20T21:02:32.762982549Z" level=info msg="container event discarded" container=aff3f84edf1a6dba21ac313f9bd1634d979129850243edee88f6a9d118138ccc type=CONTAINER_STARTED_EVENT
Apr 20 21:02:32.823974 containerd[1648]: time="2026-04-20T21:02:32.822067416Z" level=info msg="container event discarded" container=d86e372506afd8696f12ab76bf0d1a75ddc6ae2b084b48a5ba4635d05e53ec74 type=CONTAINER_STARTED_EVENT
Apr 20 21:02:34.354737 sshd[5346]: Connection closed by 10.0.0.1 port 43328
Apr 20 21:02:34.365918 sshd-session[5336]: pam_unix(sshd:session): session closed for user core
Apr 20 21:02:34.425921 systemd[1]: sshd@12-4101-10.0.0.6:22-10.0.0.1:43328.service: Deactivated successfully.
Apr 20 21:02:34.447444 systemd[1]: session-14.scope: Deactivated successfully.
Apr 20 21:02:34.447743 systemd[1]: session-14.scope: Consumed 1.571s CPU time, 19.2M memory peak.
Apr 20 21:02:34.451358 systemd-logind[1620]: Session 14 logged out. Waiting for processes to exit.
Apr 20 21:02:34.472968 systemd-logind[1620]: Removed session 14.
Apr 20 21:02:39.467837 systemd[1]: Started sshd@13-4-10.0.0.6:22-10.0.0.1:35656.service - OpenSSH per-connection server daemon (10.0.0.1:35656).
Apr 20 21:02:40.066025 sshd[5379]: Accepted publickey for core from 10.0.0.1 port 35656 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:02:40.089837 sshd-session[5379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:02:40.220747 systemd-logind[1620]: New session '15' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:02:40.236050 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 20 21:02:40.280807 kubelet[2962]: E0420 21:02:40.280697 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:02:42.280364 kubelet[2962]: E0420 21:02:42.280036 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:02:42.942360 sshd[5397]: Connection closed by 10.0.0.1 port 35656
Apr 20 21:02:42.943886 sshd-session[5379]: pam_unix(sshd:session): session closed for user core
Apr 20 21:02:42.994220 systemd[1]: sshd@13-4-10.0.0.6:22-10.0.0.1:35656.service: Deactivated successfully.
Apr 20 21:02:43.066218 systemd[1]: session-15.scope: Deactivated successfully.
Apr 20 21:02:43.170037 systemd[1]: session-15.scope: Consumed 1.913s CPU time, 16.3M memory peak.
Apr 20 21:02:43.203870 systemd-logind[1620]: Session 15 logged out. Waiting for processes to exit.
Apr 20 21:02:43.267789 systemd-logind[1620]: Removed session 15.
Apr 20 21:02:46.275527 kubelet[2962]: E0420 21:02:46.275315 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:02:48.143716 systemd[1]: Started sshd@14-8196-10.0.0.6:22-10.0.0.1:34136.service - OpenSSH per-connection server daemon (10.0.0.1:34136).
Apr 20 21:02:48.356739 kubelet[2962]: E0420 21:02:48.351525 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:02:49.011774 sshd[5438]: Accepted publickey for core from 10.0.0.1 port 34136 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:02:49.049557 sshd-session[5438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:02:49.210428 systemd-logind[1620]: New session '16' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:02:49.246519 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 20 21:02:50.534813 sshd[5442]: Connection closed by 10.0.0.1 port 34136
Apr 20 21:02:50.539814 sshd-session[5438]: pam_unix(sshd:session): session closed for user core
Apr 20 21:02:50.650457 systemd[1]: sshd@14-8196-10.0.0.6:22-10.0.0.1:34136.service: Deactivated successfully.
Apr 20 21:02:50.716618 systemd[1]: session-16.scope: Deactivated successfully.
Apr 20 21:02:50.759064 systemd-logind[1620]: Session 16 logged out. Waiting for processes to exit.
Apr 20 21:02:50.927416 systemd[1]: Started sshd@15-5-10.0.0.6:22-10.0.0.1:34144.service - OpenSSH per-connection server daemon (10.0.0.1:34144).
Apr 20 21:02:50.955814 systemd-logind[1620]: Removed session 16.
Apr 20 21:02:51.757449 sshd[5469]: Accepted publickey for core from 10.0.0.1 port 34144 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:02:51.781623 sshd-session[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:02:51.977210 systemd-logind[1620]: New session '17' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:02:52.042188 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 20 21:02:54.762938 sshd[5473]: Connection closed by 10.0.0.1 port 34144
Apr 20 21:02:54.764989 sshd-session[5469]: pam_unix(sshd:session): session closed for user core
Apr 20 21:02:54.916373 systemd[1]: sshd@15-5-10.0.0.6:22-10.0.0.1:34144.service: Deactivated successfully.
Apr 20 21:02:54.944042 systemd[1]: session-17.scope: Deactivated successfully.
Apr 20 21:02:54.948209 systemd[1]: session-17.scope: Consumed 1.817s CPU time, 27.6M memory peak.
Apr 20 21:02:54.959804 systemd-logind[1620]: Session 17 logged out. Waiting for processes to exit.
Apr 20 21:02:55.079296 systemd[1]: Started sshd@16-6-10.0.0.6:22-10.0.0.1:34148.service - OpenSSH per-connection server daemon (10.0.0.1:34148).
Apr 20 21:02:55.095249 systemd-logind[1620]: Removed session 17.
Apr 20 21:02:55.651180 sshd[5494]: Accepted publickey for core from 10.0.0.1 port 34148 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:02:55.656534 sshd-session[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:02:55.798610 systemd-logind[1620]: New session '18' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:02:55.906738 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 20 21:03:10.278363 sshd[5512]: Connection closed by 10.0.0.1 port 34148
Apr 20 21:03:10.298931 sshd-session[5494]: pam_unix(sshd:session): session closed for user core
Apr 20 21:03:10.498230 systemd[1]: sshd@16-6-10.0.0.6:22-10.0.0.1:34148.service: Deactivated successfully.
Apr 20 21:03:10.535218 systemd[1]: session-18.scope: Deactivated successfully.
Apr 20 21:03:10.535604 systemd[1]: session-18.scope: Consumed 5.248s CPU time, 38.8M memory peak.
Apr 20 21:03:10.536324 systemd-logind[1620]: Session 18 logged out. Waiting for processes to exit.
Apr 20 21:03:10.623350 systemd[1]: Started sshd@17-4102-10.0.0.6:22-10.0.0.1:46810.service - OpenSSH per-connection server daemon (10.0.0.1:46810).
Apr 20 21:03:10.651049 systemd-logind[1620]: Removed session 18.
Apr 20 21:03:11.357816 sshd[5578]: Accepted publickey for core from 10.0.0.1 port 46810 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:03:11.402593 sshd-session[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:03:11.981373 systemd-logind[1620]: New session '19' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:03:12.092282 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 20 21:03:12.368848 kubelet[2962]: E0420 21:03:12.368475 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:03:22.307955 kubelet[2962]: E0420 21:03:22.304498 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:03:31.746699 sshd[5584]: Connection closed by 10.0.0.1 port 46810
Apr 20 21:03:31.779102 sshd-session[5578]: pam_unix(sshd:session): session closed for user core
Apr 20 21:03:32.396464 systemd[1]: sshd@17-4102-10.0.0.6:22-10.0.0.1:46810.service: Deactivated successfully.
Apr 20 21:03:32.692115 systemd[1]: session-19.scope: Deactivated successfully.
Apr 20 21:03:32.703493 systemd[1]: session-19.scope: Consumed 5.694s CPU time, 27.5M memory peak.
Apr 20 21:03:32.804022 systemd-logind[1620]: Session 19 logged out. Waiting for processes to exit.
Apr 20 21:03:32.923682 systemd-logind[1620]: Removed session 19.
Apr 20 21:03:33.024614 systemd[1]: Started sshd@18-12291-10.0.0.6:22-10.0.0.1:53142.service - OpenSSH per-connection server daemon (10.0.0.1:53142).
Apr 20 21:03:34.698814 sshd[5663]: Accepted publickey for core from 10.0.0.1 port 53142 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:03:34.751707 sshd-session[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:03:34.922444 kubelet[2962]: E0420 21:03:34.920860 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:03:35.207392 systemd-logind[1620]: New session '20' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:03:35.430967 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 20 21:03:38.547960 kubelet[2962]: E0420 21:03:38.547626 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.266s"
Apr 20 21:03:42.358799 sshd[5679]: Connection closed by 10.0.0.1 port 53142
Apr 20 21:03:42.362906 sshd-session[5663]: pam_unix(sshd:session): session closed for user core
Apr 20 21:03:42.619755 systemd[1]: sshd@18-12291-10.0.0.6:22-10.0.0.1:53142.service: Deactivated successfully.
Apr 20 21:03:42.947661 systemd[1]: session-20.scope: Deactivated successfully.
Apr 20 21:03:42.977454 systemd[1]: session-20.scope: Consumed 2.961s CPU time, 15.1M memory peak.
Apr 20 21:03:43.144634 systemd-logind[1620]: Session 20 logged out. Waiting for processes to exit.
Apr 20 21:03:43.287194 systemd-logind[1620]: Removed session 20.
Apr 20 21:03:46.956089 systemd[1]: cri-containerd-e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1.scope: Deactivated successfully.
Apr 20 21:03:47.038439 containerd[1648]: time="2026-04-20T21:03:46.960268915Z" level=info msg="received container exit event container_id:\"e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1\" id:\"e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1\" pid:4673 exit_status:1 exited_at:{seconds:1776719026 nanos:929649590}"
Apr 20 21:03:47.040224 systemd[1]: cri-containerd-e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1.scope: Consumed 50.835s CPU time, 20.6M memory peak.
Apr 20 21:03:47.819374 systemd[1]: Started sshd@19-4103-10.0.0.6:22-10.0.0.1:35758.service - OpenSSH per-connection server daemon (10.0.0.1:35758).
Apr 20 21:03:49.464814 kubelet[2962]: E0420 21:03:49.462348 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:03:50.273401 sshd[5729]: Accepted publickey for core from 10.0.0.1 port 35758 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:03:50.359014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1-rootfs.mount: Deactivated successfully.
Apr 20 21:03:50.515167 sshd-session[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:03:50.921467 systemd-logind[1620]: New session '21' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:03:51.038636 kubelet[2962]: E0420 21:03:51.037572 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.676s"
Apr 20 21:03:51.080790 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 20 21:03:52.705461 kubelet[2962]: E0420 21:03:52.694806 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.025s"
Apr 20 21:03:53.047452 kubelet[2962]: I0420 21:03:53.044412 2962 scope.go:122] "RemoveContainer" containerID="ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70"
Apr 20 21:03:53.163541 kubelet[2962]: I0420 21:03:53.161598 2962 scope.go:122] "RemoveContainer" containerID="e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1"
Apr 20 21:03:53.200624 kubelet[2962]: E0420 21:03:53.197978 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:03:53.200624 kubelet[2962]: E0420 21:03:53.198593 2962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 20 21:03:53.218445 containerd[1648]: time="2026-04-20T21:03:53.218344757Z" level=info msg="RemoveContainer for \"ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70\""
Apr 20 21:03:53.411997 containerd[1648]: time="2026-04-20T21:03:53.396739338Z" level=info msg="RemoveContainer for \"ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70\" returns successfully"
Apr 20 21:03:54.483501 kubelet[2962]: I0420 21:03:54.482957 2962 scope.go:122] "RemoveContainer" containerID="e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1"
Apr 20 21:03:54.591970 kubelet[2962]: E0420 21:03:54.541630 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:03:54.591970 kubelet[2962]: E0420 21:03:54.552119 2962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 20 21:04:00.036279 sshd[5742]: Connection closed by 10.0.0.1 port 35758
Apr 20 21:04:00.038989 sshd-session[5729]: pam_unix(sshd:session): session closed for user core
Apr 20 21:04:00.106116 systemd[1]: sshd@19-4103-10.0.0.6:22-10.0.0.1:35758.service: Deactivated successfully.
Apr 20 21:04:00.224641 systemd[1]: session-21.scope: Deactivated successfully.
Apr 20 21:04:00.227636 systemd[1]: session-21.scope: Consumed 4.016s CPU time, 16.2M memory peak.
Apr 20 21:04:00.237481 systemd-logind[1620]: Session 21 logged out. Waiting for processes to exit.
Apr 20 21:04:00.377512 systemd-logind[1620]: Removed session 21.
Apr 20 21:04:02.338800 kubelet[2962]: E0420 21:04:02.337693 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:04:03.360488 kubelet[2962]: E0420 21:04:03.349089 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:04:04.329700 kubelet[2962]: E0420 21:04:04.329111 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:04:05.650573 systemd[1]: Started sshd@20-7-10.0.0.6:22-10.0.0.1:44928.service - OpenSSH per-connection server daemon (10.0.0.1:44928).
Apr 20 21:04:07.108619 kubelet[2962]: E0420 21:04:07.102865 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.791s"
Apr 20 21:04:07.298694 sshd[5803]: Accepted publickey for core from 10.0.0.1 port 44928 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:04:07.661985 sshd-session[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:04:08.261741 systemd-logind[1620]: New session '22' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:04:08.308924 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 20 21:04:08.569558 kubelet[2962]: E0420 21:04:08.560591 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.221s"
Apr 20 21:04:10.144009 containerd[1648]: time="2026-04-20T21:04:10.139705101Z" level=info msg="container event discarded" container=ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425 type=CONTAINER_STOPPED_EVENT
Apr 20 21:04:12.891480 containerd[1648]: time="2026-04-20T21:04:12.890488518Z" level=info msg="container event discarded" container=8c333711b088fef3a0009877d0e8be006a4a528e822e759eeb64c07223a1d9b8 type=CONTAINER_DELETED_EVENT
Apr 20 21:04:15.274891 containerd[1648]: time="2026-04-20T21:04:15.267370786Z" level=info msg="container event discarded" container=ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70 type=CONTAINER_STOPPED_EVENT
Apr 20 21:04:15.389670 kubelet[2962]: E0420 21:04:15.385606 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:04:15.794617 systemd[1]: cri-containerd-603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743.scope: Deactivated successfully.
Apr 20 21:04:15.795609 systemd[1]: cri-containerd-603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743.scope: Consumed 1min 25.276s CPU time, 55M memory peak, 36K read from disk.
Apr 20 21:04:15.958437 containerd[1648]: time="2026-04-20T21:04:15.862044705Z" level=info msg="received container exit event container_id:\"603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743\" id:\"603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743\" pid:4633 exit_status:1 exited_at:{seconds:1776719055 nanos:813747336}"
Apr 20 21:04:16.407917 sshd[5821]: Connection closed by 10.0.0.1 port 44928
Apr 20 21:04:16.507721 sshd-session[5803]: pam_unix(sshd:session): session closed for user core
Apr 20 21:04:16.729080 systemd[1]: sshd@20-7-10.0.0.6:22-10.0.0.1:44928.service: Deactivated successfully.
Apr 20 21:04:16.890989 systemd[1]: session-22.scope: Deactivated successfully.
Apr 20 21:04:16.920838 systemd[1]: session-22.scope: Consumed 4.537s CPU time, 16.1M memory peak.
Apr 20 21:04:16.972503 systemd-logind[1620]: Session 22 logged out. Waiting for processes to exit.
Apr 20 21:04:17.122049 systemd-logind[1620]: Removed session 22.
Apr 20 21:04:17.418972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743-rootfs.mount: Deactivated successfully.
Apr 20 21:04:17.934633 kubelet[2962]: I0420 21:04:17.931686 2962 scope.go:122] "RemoveContainer" containerID="ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425"
Apr 20 21:04:18.023642 kubelet[2962]: I0420 21:04:18.021862 2962 scope.go:122] "RemoveContainer" containerID="603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743"
Apr 20 21:04:18.032757 kubelet[2962]: E0420 21:04:18.030471 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:04:18.056677 kubelet[2962]: E0420 21:04:18.039855 2962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 20 21:04:18.120284 containerd[1648]: time="2026-04-20T21:04:18.119746849Z" level=info msg="RemoveContainer for \"ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425\""
Apr 20 21:04:18.405816 containerd[1648]: time="2026-04-20T21:04:18.403484540Z" level=info msg="RemoveContainer for \"ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425\" returns successfully"
Apr 20 21:04:20.793527 kubelet[2962]: I0420 21:04:20.792537 2962 scope.go:122] "RemoveContainer" containerID="603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743"
Apr 20 21:04:20.793527 kubelet[2962]: E0420 21:04:20.797273 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:04:20.858700 kubelet[2962]: E0420 21:04:20.811805 2962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 20 21:04:22.228350 systemd[1]: Started sshd@21-8197-10.0.0.6:22-10.0.0.1:47668.service - OpenSSH per-connection server daemon (10.0.0.1:47668).
Apr 20 21:04:25.264969 kubelet[2962]: E0420 21:04:25.262629 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.975s"
Apr 20 21:04:25.338385 sshd[5886]: Accepted publickey for core from 10.0.0.1 port 47668 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:04:25.568317 sshd-session[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:04:25.782027 kubelet[2962]: E0420 21:04:25.768079 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:04:26.188885 systemd-logind[1620]: New session '23' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:04:26.340037 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 20 21:04:26.553571 kubelet[2962]: E0420 21:04:26.551536 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.212s"
Apr 20 21:04:33.822739 sshd[5901]: Connection closed by 10.0.0.1 port 47668
Apr 20 21:04:33.842051 sshd-session[5886]: pam_unix(sshd:session): session closed for user core
Apr 20 21:04:34.177302 systemd[1]: sshd@21-8197-10.0.0.6:22-10.0.0.1:47668.service: Deactivated successfully.
Apr 20 21:04:34.258456 systemd[1]: sshd@21-8197-10.0.0.6:22-10.0.0.1:47668.service: Consumed 1.295s CPU time, 4.4M memory peak.
Apr 20 21:04:34.366958 systemd[1]: session-23.scope: Deactivated successfully.
Apr 20 21:04:34.404356 systemd[1]: session-23.scope: Consumed 4.491s CPU time, 16.1M memory peak.
Apr 20 21:04:34.476650 systemd-logind[1620]: Session 23 logged out. Waiting for processes to exit.
Apr 20 21:04:34.651915 systemd-logind[1620]: Removed session 23.
Apr 20 21:04:39.625427 systemd[1]: Started sshd@22-8198-10.0.0.6:22-10.0.0.1:40088.service - OpenSSH per-connection server daemon (10.0.0.1:40088).
Apr 20 21:04:40.640404 kubelet[2962]: E0420 21:04:40.638922 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.353s"
Apr 20 21:04:41.807811 sshd[5962]: Accepted publickey for core from 10.0.0.1 port 40088 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:04:41.835020 sshd-session[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:04:42.265892 systemd-logind[1620]: New session '24' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:04:42.467712 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 20 21:04:43.623406 kubelet[2962]: E0420 21:04:43.616839 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:04:47.475362 sshd[5972]: Connection closed by 10.0.0.1 port 40088
Apr 20 21:04:47.479835 sshd-session[5962]: pam_unix(sshd:session): session closed for user core
Apr 20 21:04:47.529207 systemd[1]: sshd@22-8198-10.0.0.6:22-10.0.0.1:40088.service: Deactivated successfully.
Apr 20 21:04:47.766870 systemd[1]: session-24.scope: Deactivated successfully.
Apr 20 21:04:47.768543 systemd[1]: session-24.scope: Consumed 3.363s CPU time, 16M memory peak.
Apr 20 21:04:47.842634 systemd-logind[1620]: Session 24 logged out. Waiting for processes to exit.
Apr 20 21:04:47.966799 systemd-logind[1620]: Removed session 24.
Apr 20 21:04:52.476403 containerd[1648]: time="2026-04-20T21:04:52.474051857Z" level=info msg="container event discarded" container=603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743 type=CONTAINER_CREATED_EVENT
Apr 20 21:04:53.171856 systemd[1]: Started sshd@23-8-10.0.0.6:22-10.0.0.1:45062.service - OpenSSH per-connection server daemon (10.0.0.1:45062).
Apr 20 21:04:55.588859 sshd[6021]: Accepted publickey for core from 10.0.0.1 port 45062 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:04:55.766505 sshd-session[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:04:56.506221 systemd-logind[1620]: New session '25' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:04:56.634751 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 20 21:04:59.903538 containerd[1648]: time="2026-04-20T21:04:59.896392916Z" level=info msg="container event discarded" container=e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1 type=CONTAINER_CREATED_EVENT
Apr 20 21:05:03.766958 sshd[6042]: Connection closed by 10.0.0.1 port 45062
Apr 20 21:05:03.777095 sshd-session[6021]: pam_unix(sshd:session): session closed for user core
Apr 20 21:05:04.025383 systemd[1]: sshd@23-8-10.0.0.6:22-10.0.0.1:45062.service: Deactivated successfully.
Apr 20 21:05:04.039814 systemd[1]: sshd@23-8-10.0.0.6:22-10.0.0.1:45062.service: Consumed 1.035s CPU time, 4.1M memory peak.
Apr 20 21:05:04.163569 containerd[1648]: time="2026-04-20T21:05:04.163446093Z" level=info msg="container event discarded" container=603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743 type=CONTAINER_STARTED_EVENT
Apr 20 21:05:04.163893 systemd[1]: session-25.scope: Deactivated successfully.
Apr 20 21:05:04.164242 systemd[1]: session-25.scope: Consumed 4.426s CPU time, 15.9M memory peak.
Apr 20 21:05:04.165912 systemd-logind[1620]: Session 25 logged out. Waiting for processes to exit.
Apr 20 21:05:04.202822 systemd-logind[1620]: Removed session 25. Apr 20 21:05:08.989046 systemd[1]: Started sshd@24-12292-10.0.0.6:22-10.0.0.1:36592.service - OpenSSH per-connection server daemon (10.0.0.1:36592). Apr 20 21:05:10.977953 sshd[6086]: Accepted publickey for core from 10.0.0.1 port 36592 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:05:11.033543 sshd-session[6086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:05:11.645398 systemd-logind[1620]: New session '26' of user 'core' with class 'user' and type 'tty'. Apr 20 21:05:11.955345 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 20 21:05:13.699589 containerd[1648]: time="2026-04-20T21:05:13.692244055Z" level=info msg="container event discarded" container=e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1 type=CONTAINER_STARTED_EVENT Apr 20 21:05:16.333732 kubelet[2962]: E0420 21:05:16.332901 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:05:21.513014 kubelet[2962]: I0420 21:05:21.511429 2962 scope.go:122] "RemoveContainer" containerID="e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1" Apr 20 21:05:21.513014 kubelet[2962]: E0420 21:05:21.512078 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:05:21.745054 containerd[1648]: time="2026-04-20T21:05:21.739491430Z" level=info msg="CreateContainer within sandbox \"c96e5b00fc0697c110fb24dcdaa0b47b62e5b149bf28b5d19396495840a65284\" for container name:\"kube-scheduler\" attempt:5" Apr 20 21:05:22.177286 containerd[1648]: time="2026-04-20T21:05:22.176384058Z" level=info msg="Container 98c4d4fcdda2af9f7c553bf5f5291cd96d9b293138338c411daa5d2451868066: CDI devices from CRI Config.CDIDevices: []" Apr 20 21:05:22.400862 containerd[1648]: time="2026-04-20T21:05:22.399231242Z" level=info msg="CreateContainer within sandbox \"c96e5b00fc0697c110fb24dcdaa0b47b62e5b149bf28b5d19396495840a65284\" for name:\"kube-scheduler\" attempt:5 returns container id \"98c4d4fcdda2af9f7c553bf5f5291cd96d9b293138338c411daa5d2451868066\"" Apr 20 21:05:22.495956 containerd[1648]: time="2026-04-20T21:05:22.495773646Z" level=info msg="StartContainer for \"98c4d4fcdda2af9f7c553bf5f5291cd96d9b293138338c411daa5d2451868066\"" Apr 20 21:05:22.505519 containerd[1648]: time="2026-04-20T21:05:22.503589028Z" level=info msg="connecting to shim 98c4d4fcdda2af9f7c553bf5f5291cd96d9b293138338c411daa5d2451868066" address="unix:///run/containerd/s/61d64848142b77a3bbfcc5d60ff12803e5d69747435a7b24f6de5ae72a49376f" protocol=ttrpc version=3 Apr 20 21:05:23.045011 sshd[6104]: Connection closed by 10.0.0.1 port 36592 Apr 20 21:05:23.054830 sshd-session[6086]: pam_unix(sshd:session): session closed for user core Apr 20 21:05:23.247329 systemd[1]: sshd@24-12292-10.0.0.6:22-10.0.0.1:36592.service: Deactivated successfully. Apr 20 21:05:23.490081 systemd[1]: session-26.scope: Deactivated successfully. Apr 20 21:05:23.529925 systemd[1]: session-26.scope: Consumed 6.577s CPU time, 17.8M memory peak. Apr 20 21:05:23.683583 systemd-logind[1620]: Session 26 logged out. Waiting for processes to exit. Apr 20 21:05:23.869423 systemd-logind[1620]: Removed session 26. Apr 20 21:05:25.081292 systemd[1]: Started cri-containerd-98c4d4fcdda2af9f7c553bf5f5291cd96d9b293138338c411daa5d2451868066.scope - libcontainer container 98c4d4fcdda2af9f7c553bf5f5291cd96d9b293138338c411daa5d2451868066.
Apr 20 21:05:27.438805 containerd[1648]: time="2026-04-20T21:05:27.436777023Z" level=error msg="get state for 98c4d4fcdda2af9f7c553bf5f5291cd96d9b293138338c411daa5d2451868066" error="context deadline exceeded" Apr 20 21:05:27.438805 containerd[1648]: time="2026-04-20T21:05:27.440237791Z" level=warning msg="unknown status" status=0 Apr 20 21:05:27.936305 containerd[1648]: time="2026-04-20T21:05:27.934571544Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 21:05:28.402052 systemd[1]: Started sshd@25-12293-10.0.0.6:22-10.0.0.1:39016.service - OpenSSH per-connection server daemon (10.0.0.1:39016). Apr 20 21:05:28.734278 containerd[1648]: time="2026-04-20T21:05:28.719496087Z" level=info msg="StartContainer for \"98c4d4fcdda2af9f7c553bf5f5291cd96d9b293138338c411daa5d2451868066\" returns successfully" Apr 20 21:05:30.786173 kubelet[2962]: E0420 21:05:30.747005 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.452s" Apr 20 21:05:30.890700 kubelet[2962]: E0420 21:05:30.889902 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:05:30.890700 kubelet[2962]: E0420 21:05:30.889912 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:05:31.263980 sshd[6194]: Accepted publickey for core from 10.0.0.1 port 39016 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:05:31.344947 sshd-session[6194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:05:31.950576 systemd-logind[1620]: New session '27' of user 'core' with class 'user' and type 'tty'. Apr 20 21:05:32.299877 systemd[1]: Started session-27.scope - Session 27 of User core. 
Apr 20 21:05:32.341066 kubelet[2962]: E0420 21:05:32.320016 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.03s" Apr 20 21:05:32.706046 kubelet[2962]: E0420 21:05:32.691897 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:05:36.347401 kubelet[2962]: I0420 21:05:36.345488 2962 scope.go:122] "RemoveContainer" containerID="603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743" Apr 20 21:05:36.363019 kubelet[2962]: E0420 21:05:36.357434 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:05:36.445845 containerd[1648]: time="2026-04-20T21:05:36.445517093Z" level=info msg="CreateContainer within sandbox \"f9b6b7d24ccd8773e81c37185f210b92cf271465cbfe4f09851166ce3e8a2a4b\" for container name:\"kube-controller-manager\" attempt:6" Apr 20 21:05:36.805558 containerd[1648]: time="2026-04-20T21:05:36.803083735Z" level=info msg="Container 51377bdc86358974add5e266e89c3b2fbaeed0bcb73cbd3c56fbd71038847b2c: CDI devices from CRI Config.CDIDevices: []" Apr 20 21:05:37.166403 containerd[1648]: time="2026-04-20T21:05:37.158239477Z" level=info msg="CreateContainer within sandbox \"f9b6b7d24ccd8773e81c37185f210b92cf271465cbfe4f09851166ce3e8a2a4b\" for name:\"kube-controller-manager\" attempt:6 returns container id \"51377bdc86358974add5e266e89c3b2fbaeed0bcb73cbd3c56fbd71038847b2c\"" Apr 20 21:05:37.381665 containerd[1648]: time="2026-04-20T21:05:37.379349860Z" level=info msg="StartContainer for \"51377bdc86358974add5e266e89c3b2fbaeed0bcb73cbd3c56fbd71038847b2c\"" Apr 20 21:05:37.465114 containerd[1648]: time="2026-04-20T21:05:37.461761411Z" level=info msg="connecting to shim 51377bdc86358974add5e266e89c3b2fbaeed0bcb73cbd3c56fbd71038847b2c" address="unix:///run/containerd/s/8b77fcd47a339a13e379c28c84db3ce17f41850650ed4777ce96169d01489760" protocol=ttrpc version=3 Apr 20 21:05:37.654988 kubelet[2962]: E0420 21:05:37.644942 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:05:38.521046 kubelet[2962]: E0420 21:05:38.519705 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:05:38.690960 systemd[1]: Started cri-containerd-51377bdc86358974add5e266e89c3b2fbaeed0bcb73cbd3c56fbd71038847b2c.scope - libcontainer container 51377bdc86358974add5e266e89c3b2fbaeed0bcb73cbd3c56fbd71038847b2c. Apr 20 21:05:40.254861 kubelet[2962]: I0420 21:05:40.251899 2962 scope.go:122] "RemoveContainer" containerID="603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743" Apr 20 21:05:40.588982 containerd[1648]: time="2026-04-20T21:05:40.575429275Z" level=info msg="RemoveContainer for \"603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743\"" Apr 20 21:05:40.917294 containerd[1648]: time="2026-04-20T21:05:40.900769889Z" level=error msg="get state for 51377bdc86358974add5e266e89c3b2fbaeed0bcb73cbd3c56fbd71038847b2c" error="context deadline exceeded" Apr 20 21:05:40.917294 containerd[1648]: time="2026-04-20T21:05:40.900966039Z" level=warning msg="unknown status" status=0 Apr 20 21:05:41.190089 sshd[6224]: Connection closed by 10.0.0.1 port 39016 Apr 20 21:05:41.194733 sshd-session[6194]: pam_unix(sshd:session): session closed for user core Apr 20 21:05:41.198385 containerd[1648]: time="2026-04-20T21:05:41.197858867Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 21:05:41.509916 systemd[1]: sshd@25-12293-10.0.0.6:22-10.0.0.1:39016.service: Deactivated successfully.
Apr 20 21:05:41.667051 systemd[1]: session-27.scope: Deactivated successfully. Apr 20 21:05:41.676545 systemd[1]: session-27.scope: Consumed 3.035s CPU time, 17.7M memory peak. Apr 20 21:05:41.786112 systemd-logind[1620]: Session 27 logged out. Waiting for processes to exit. Apr 20 21:05:41.902779 containerd[1648]: time="2026-04-20T21:05:41.793377964Z" level=info msg="RemoveContainer for \"603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743\" returns successfully" Apr 20 21:05:41.905207 systemd-logind[1620]: Removed session 27. Apr 20 21:05:42.320860 kubelet[2962]: E0420 21:05:42.320647 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.04s" Apr 20 21:05:43.033985 containerd[1648]: time="2026-04-20T21:05:42.913876522Z" level=info msg="StartContainer for \"51377bdc86358974add5e266e89c3b2fbaeed0bcb73cbd3c56fbd71038847b2c\" returns successfully" Apr 20 21:05:44.556925 kubelet[2962]: E0420 21:05:44.556377 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.16s" Apr 20 21:05:45.151760 kubelet[2962]: E0420 21:05:45.151569 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:05:46.762475 systemd[1]: Started sshd@26-4104-10.0.0.6:22-10.0.0.1:40558.service - OpenSSH per-connection server daemon (10.0.0.1:40558). Apr 20 21:05:49.088970 sshd[6312]: Accepted publickey for core from 10.0.0.1 port 40558 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:05:49.302594 sshd-session[6312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:05:50.292024 systemd-logind[1620]: New session '28' of user 'core' with class 'user' and type 'tty'. Apr 20 21:05:50.444470 systemd[1]: Started session-28.scope - Session 28 of User core. 
Apr 20 21:05:51.093969 kubelet[2962]: E0420 21:05:51.069171 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.775s" Apr 20 21:05:52.512538 kubelet[2962]: E0420 21:05:52.511820 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:05:53.538326 kubelet[2962]: E0420 21:05:53.533108 2962 request.go:1196] "Unexpected error when reading response body" err="context deadline exceeded" Apr 20 21:05:53.618332 kubelet[2962]: E0420 21:05:53.541363 2962 controller.go:251] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: context deadline exceeded" Apr 20 21:05:58.179377 kubelet[2962]: E0420 21:05:58.178518 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:06:01.580824 kubelet[2962]: E0420 21:06:01.578826 2962 controller.go:251] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again" Apr 20 21:06:03.069101 kubelet[2962]: E0420 21:06:03.066835 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:06:03.423806 kubelet[2962]: E0420 21:06:03.412700 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:06:03.441555 sshd[6328]: Connection closed by 10.0.0.1 port 40558 Apr 20 21:06:03.434089 sshd-session[6312]: pam_unix(sshd:session): session closed for user core Apr 20 21:06:03.636984 systemd[1]: sshd@26-4104-10.0.0.6:22-10.0.0.1:40558.service: Deactivated successfully. Apr 20 21:06:03.643873 systemd[1]: sshd@26-4104-10.0.0.6:22-10.0.0.1:40558.service: Consumed 1.035s CPU time, 4.1M memory peak. Apr 20 21:06:03.916430 systemd[1]: session-28.scope: Deactivated successfully. Apr 20 21:06:03.936956 systemd[1]: session-28.scope: Consumed 4.586s CPU time, 15.7M memory peak. Apr 20 21:06:04.095458 systemd-logind[1620]: Session 28 logged out. Waiting for processes to exit. Apr 20 21:06:04.266660 systemd-logind[1620]: Removed session 28. Apr 20 21:06:09.252096 systemd[1]: Started sshd@27-12294-10.0.0.6:22-10.0.0.1:36972.service - OpenSSH per-connection server daemon (10.0.0.1:36972). Apr 20 21:06:10.619438 kubelet[2962]: E0420 21:06:10.583081 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:06:12.128050 sshd[6408]: Accepted publickey for core from 10.0.0.1 port 36972 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:06:12.381358 sshd-session[6408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:06:13.009289 systemd-logind[1620]: New session '29' of user 'core' with class 'user' and type 'tty'. Apr 20 21:06:13.122215 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 20 21:06:16.836479 sshd[6423]: Connection closed by 10.0.0.1 port 36972 Apr 20 21:06:16.847560 sshd-session[6408]: pam_unix(sshd:session): session closed for user core Apr 20 21:06:16.987103 systemd[1]: sshd@27-12294-10.0.0.6:22-10.0.0.1:36972.service: Deactivated successfully. Apr 20 21:06:16.988626 systemd[1]: sshd@27-12294-10.0.0.6:22-10.0.0.1:36972.service: Consumed 1.005s CPU time, 4.2M memory peak. Apr 20 21:06:17.142821 systemd[1]: session-29.scope: Deactivated successfully. Apr 20 21:06:17.206299 systemd[1]: session-29.scope: Consumed 2.407s CPU time, 15.9M memory peak.
Apr 20 21:06:17.315122 systemd-logind[1620]: Session 29 logged out. Waiting for processes to exit. Apr 20 21:06:17.380241 kubelet[2962]: E0420 21:06:17.355049 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:06:17.394900 systemd-logind[1620]: Removed session 29. Apr 20 21:06:22.288420 systemd[1]: Started sshd@28-9-10.0.0.6:22-10.0.0.1:37498.service - OpenSSH per-connection server daemon (10.0.0.1:37498). Apr 20 21:06:22.895955 sshd[6468]: Accepted publickey for core from 10.0.0.1 port 37498 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:06:22.906665 sshd-session[6468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:06:23.362831 systemd-logind[1620]: New session '30' of user 'core' with class 'user' and type 'tty'. Apr 20 21:06:23.424698 systemd[1]: Started session-30.scope - Session 30 of User core. Apr 20 21:06:28.484871 sshd[6474]: Connection closed by 10.0.0.1 port 37498 Apr 20 21:06:28.510527 sshd-session[6468]: pam_unix(sshd:session): session closed for user core Apr 20 21:06:28.723329 systemd[1]: sshd@28-9-10.0.0.6:22-10.0.0.1:37498.service: Deactivated successfully. Apr 20 21:06:28.991776 systemd[1]: session-30.scope: Deactivated successfully. Apr 20 21:06:29.092872 systemd[1]: session-30.scope: Consumed 3.159s CPU time, 17.6M memory peak. Apr 20 21:06:29.160952 systemd-logind[1620]: Session 30 logged out. Waiting for processes to exit. Apr 20 21:06:29.298935 systemd-logind[1620]: Removed session 30. Apr 20 21:06:34.073644 systemd[1]: Started sshd@29-12295-10.0.0.6:22-10.0.0.1:46830.service - OpenSSH per-connection server daemon (10.0.0.1:46830). 
Apr 20 21:06:36.257667 sshd[6527]: Accepted publickey for core from 10.0.0.1 port 46830 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:06:36.335513 sshd-session[6527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:06:36.739947 systemd-logind[1620]: New session '31' of user 'core' with class 'user' and type 'tty'. Apr 20 21:06:36.957540 systemd[1]: Started session-31.scope - Session 31 of User core. Apr 20 21:06:44.461803 sshd[6539]: Connection closed by 10.0.0.1 port 46830 Apr 20 21:06:44.469966 sshd-session[6527]: pam_unix(sshd:session): session closed for user core Apr 20 21:06:44.668711 systemd[1]: sshd@29-12295-10.0.0.6:22-10.0.0.1:46830.service: Deactivated successfully. Apr 20 21:06:44.747993 systemd[1]: session-31.scope: Deactivated successfully. Apr 20 21:06:44.761848 systemd[1]: session-31.scope: Consumed 3.976s CPU time, 16M memory peak. Apr 20 21:06:44.899962 systemd-logind[1620]: Session 31 logged out. Waiting for processes to exit. Apr 20 21:06:44.980091 systemd-logind[1620]: Removed session 31. Apr 20 21:06:49.833669 systemd[1]: Started sshd@30-8199-10.0.0.6:22-10.0.0.1:47256.service - OpenSSH per-connection server daemon (10.0.0.1:47256). Apr 20 21:06:52.717366 sshd[6593]: Accepted publickey for core from 10.0.0.1 port 47256 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:06:52.763096 sshd-session[6593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:06:53.192682 systemd-logind[1620]: New session '32' of user 'core' with class 'user' and type 'tty'. Apr 20 21:06:53.255967 systemd[1]: Started session-32.scope - Session 32 of User core. 
Apr 20 21:06:53.660328 kubelet[2962]: E0420 21:06:53.647320 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:06:54.346842 kubelet[2962]: E0420 21:06:54.344293 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:06:58.299434 kubelet[2962]: E0420 21:06:58.298280 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:06:58.622077 sshd[6599]: Connection closed by 10.0.0.1 port 47256 Apr 20 21:06:58.627443 sshd-session[6593]: pam_unix(sshd:session): session closed for user core Apr 20 21:06:58.874062 systemd[1]: sshd@30-8199-10.0.0.6:22-10.0.0.1:47256.service: Deactivated successfully. Apr 20 21:06:58.936666 systemd[1]: sshd@30-8199-10.0.0.6:22-10.0.0.1:47256.service: Consumed 1.217s CPU time, 4.1M memory peak. Apr 20 21:06:59.125582 systemd[1]: session-32.scope: Deactivated successfully. Apr 20 21:06:59.131187 systemd[1]: session-32.scope: Consumed 2.921s CPU time, 15.7M memory peak. Apr 20 21:06:59.138713 systemd-logind[1620]: Session 32 logged out. Waiting for processes to exit. Apr 20 21:06:59.262842 systemd-logind[1620]: Removed session 32. Apr 20 21:07:04.253500 systemd[1]: Started sshd@31-8200-10.0.0.6:22-10.0.0.1:40526.service - OpenSSH per-connection server daemon (10.0.0.1:40526). Apr 20 21:07:06.165402 sshd[6655]: Accepted publickey for core from 10.0.0.1 port 40526 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:07:06.278091 sshd-session[6655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:07:06.962085 systemd-logind[1620]: New session '33' of user 'core' with class 'user' and type 'tty'. 
Apr 20 21:07:07.068111 systemd[1]: Started session-33.scope - Session 33 of User core. Apr 20 21:07:09.354916 systemd[1]: cri-containerd-51377bdc86358974add5e266e89c3b2fbaeed0bcb73cbd3c56fbd71038847b2c.scope: Deactivated successfully. Apr 20 21:07:09.413489 systemd[1]: cri-containerd-51377bdc86358974add5e266e89c3b2fbaeed0bcb73cbd3c56fbd71038847b2c.scope: Consumed 21.910s CPU time, 20.1M memory peak. Apr 20 21:07:09.458968 containerd[1648]: time="2026-04-20T21:07:09.362505602Z" level=info msg="received container exit event container_id:\"51377bdc86358974add5e266e89c3b2fbaeed0bcb73cbd3c56fbd71038847b2c\" id:\"51377bdc86358974add5e266e89c3b2fbaeed0bcb73cbd3c56fbd71038847b2c\" pid:6271 exit_status:1 exited_at:{seconds:1776719229 nanos:349496492}" Apr 20 21:07:11.074718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51377bdc86358974add5e266e89c3b2fbaeed0bcb73cbd3c56fbd71038847b2c-rootfs.mount: Deactivated successfully. Apr 20 21:07:11.105731 sshd[6665]: Connection closed by 10.0.0.1 port 40526 Apr 20 21:07:11.121836 sshd-session[6655]: pam_unix(sshd:session): session closed for user core Apr 20 21:07:11.200810 systemd[1]: sshd@31-8200-10.0.0.6:22-10.0.0.1:40526.service: Deactivated successfully. Apr 20 21:07:11.283712 systemd[1]: session-33.scope: Deactivated successfully. Apr 20 21:07:11.288245 systemd[1]: session-33.scope: Consumed 1.726s CPU time, 15.9M memory peak. Apr 20 21:07:11.400946 systemd-logind[1620]: Session 33 logged out. Waiting for processes to exit. Apr 20 21:07:11.466693 systemd-logind[1620]: Removed session 33. 
Apr 20 21:07:12.097814 kubelet[2962]: I0420 21:07:12.097483 2962 scope.go:122] "RemoveContainer" containerID="51377bdc86358974add5e266e89c3b2fbaeed0bcb73cbd3c56fbd71038847b2c" Apr 20 21:07:12.148677 kubelet[2962]: E0420 21:07:12.121212 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:07:12.148677 kubelet[2962]: E0420 21:07:12.132069 2962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" Apr 20 21:07:16.196617 systemd[1]: Started sshd@32-10-10.0.0.6:22-10.0.0.1:59418.service - OpenSSH per-connection server daemon (10.0.0.1:59418). Apr 20 21:07:16.477033 sshd[6725]: Accepted publickey for core from 10.0.0.1 port 59418 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:07:16.484034 sshd-session[6725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:07:16.638532 systemd-logind[1620]: New session '34' of user 'core' with class 'user' and type 'tty'. Apr 20 21:07:16.658949 systemd[1]: Started session-34.scope - Session 34 of User core. Apr 20 21:07:17.174819 sshd[6729]: Connection closed by 10.0.0.1 port 59418 Apr 20 21:07:17.175728 sshd-session[6725]: pam_unix(sshd:session): session closed for user core Apr 20 21:07:17.186868 systemd[1]: sshd@32-10-10.0.0.6:22-10.0.0.1:59418.service: Deactivated successfully. Apr 20 21:07:17.285475 systemd[1]: session-34.scope: Deactivated successfully. 
Apr 20 21:07:17.303621 kubelet[2962]: E0420 21:07:17.302971 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:07:17.337944 systemd-logind[1620]: Session 34 logged out. Waiting for processes to exit. Apr 20 21:07:17.341451 systemd-logind[1620]: Removed session 34. Apr 20 21:07:20.664270 kubelet[2962]: I0420 21:07:20.663834 2962 scope.go:122] "RemoveContainer" containerID="51377bdc86358974add5e266e89c3b2fbaeed0bcb73cbd3c56fbd71038847b2c" Apr 20 21:07:20.664270 kubelet[2962]: E0420 21:07:20.664120 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:07:20.664270 kubelet[2962]: E0420 21:07:20.664355 2962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" Apr 20 21:07:22.237828 systemd[1]: Started sshd@33-12296-10.0.0.6:22-10.0.0.1:59426.service - OpenSSH per-connection server daemon (10.0.0.1:59426). Apr 20 21:07:22.367360 sshd[6763]: Accepted publickey for core from 10.0.0.1 port 59426 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:07:22.375861 sshd-session[6763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:07:22.388393 systemd-logind[1620]: New session '35' of user 'core' with class 'user' and type 'tty'. Apr 20 21:07:22.405114 systemd[1]: Started session-35.scope - Session 35 of User core. 
Apr 20 21:07:22.637371 sshd[6767]: Connection closed by 10.0.0.1 port 59426 Apr 20 21:07:22.639586 sshd-session[6763]: pam_unix(sshd:session): session closed for user core Apr 20 21:07:22.663476 systemd[1]: sshd@33-12296-10.0.0.6:22-10.0.0.1:59426.service: Deactivated successfully. Apr 20 21:07:22.688462 systemd[1]: session-35.scope: Deactivated successfully. Apr 20 21:07:22.765071 systemd-logind[1620]: Session 35 logged out. Waiting for processes to exit. Apr 20 21:07:22.773597 systemd-logind[1620]: Removed session 35. Apr 20 21:07:27.701937 systemd[1]: Started sshd@34-4105-10.0.0.6:22-10.0.0.1:34318.service - OpenSSH per-connection server daemon (10.0.0.1:34318). Apr 20 21:07:27.841639 sshd[6810]: Accepted publickey for core from 10.0.0.1 port 34318 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:07:27.844388 sshd-session[6810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:07:27.850260 systemd-logind[1620]: New session '36' of user 'core' with class 'user' and type 'tty'. Apr 20 21:07:27.861623 systemd[1]: Started session-36.scope - Session 36 of User core. Apr 20 21:07:28.296295 sshd[6814]: Connection closed by 10.0.0.1 port 34318 Apr 20 21:07:28.297700 sshd-session[6810]: pam_unix(sshd:session): session closed for user core Apr 20 21:07:28.331561 systemd[1]: sshd@34-4105-10.0.0.6:22-10.0.0.1:34318.service: Deactivated successfully. Apr 20 21:07:28.344345 systemd[1]: session-36.scope: Deactivated successfully. Apr 20 21:07:28.349890 systemd-logind[1620]: Session 36 logged out. Waiting for processes to exit. Apr 20 21:07:28.355786 systemd-logind[1620]: Removed session 36. 
Apr 20 21:07:30.275412 kubelet[2962]: E0420 21:07:30.275075 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:07:33.278282 kubelet[2962]: E0420 21:07:33.277993 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 21:07:33.325963 systemd[1]: Started sshd@35-11-10.0.0.6:22-10.0.0.1:34332.service - OpenSSH per-connection server daemon (10.0.0.1:34332). Apr 20 21:07:33.582816 sshd[6847]: Accepted publickey for core from 10.0.0.1 port 34332 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:07:33.587004 sshd-session[6847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:07:33.625050 systemd-logind[1620]: New session '37' of user 'core' with class 'user' and type 'tty'. Apr 20 21:07:33.639561 systemd[1]: Started session-37.scope - Session 37 of User core. Apr 20 21:07:33.771806 sshd[6851]: Connection closed by 10.0.0.1 port 34332 Apr 20 21:07:33.773837 sshd-session[6847]: pam_unix(sshd:session): session closed for user core Apr 20 21:07:33.777197 systemd[1]: sshd@35-11-10.0.0.6:22-10.0.0.1:34332.service: Deactivated successfully. Apr 20 21:07:33.779185 systemd[1]: session-37.scope: Deactivated successfully. Apr 20 21:07:33.783374 systemd-logind[1620]: Session 37 logged out. Waiting for processes to exit. Apr 20 21:07:33.785242 systemd-logind[1620]: Removed session 37. Apr 20 21:07:38.815166 systemd[1]: Started sshd@36-12-10.0.0.6:22-10.0.0.1:54782.service - OpenSSH per-connection server daemon (10.0.0.1:54782). 
Apr 20 21:07:39.080280 sshd[6884]: Accepted publickey for core from 10.0.0.1 port 54782 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:07:39.082506 sshd-session[6884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:07:39.100253 systemd-logind[1620]: New session '38' of user 'core' with class 'user' and type 'tty'. Apr 20 21:07:39.111274 systemd[1]: Started session-38.scope - Session 38 of User core. Apr 20 21:07:39.365573 sshd[6888]: Connection closed by 10.0.0.1 port 54782 Apr 20 21:07:39.366571 sshd-session[6884]: pam_unix(sshd:session): session closed for user core Apr 20 21:07:39.373346 systemd[1]: sshd@36-12-10.0.0.6:22-10.0.0.1:54782.service: Deactivated successfully. Apr 20 21:07:39.376625 systemd[1]: session-38.scope: Deactivated successfully. Apr 20 21:07:39.377395 systemd-logind[1620]: Session 38 logged out. Waiting for processes to exit. Apr 20 21:07:39.381378 systemd-logind[1620]: Removed session 38. Apr 20 21:07:44.398077 systemd[1]: Started sshd@37-4106-10.0.0.6:22-10.0.0.1:54790.service - OpenSSH per-connection server daemon (10.0.0.1:54790). Apr 20 21:07:44.465800 sshd[6921]: Accepted publickey for core from 10.0.0.1 port 54790 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:07:44.471624 sshd-session[6921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:07:44.482988 systemd-logind[1620]: New session '39' of user 'core' with class 'user' and type 'tty'. Apr 20 21:07:44.492882 systemd[1]: Started session-39.scope - Session 39 of User core. Apr 20 21:07:44.659046 sshd[6925]: Connection closed by 10.0.0.1 port 54790 Apr 20 21:07:44.659283 sshd-session[6921]: pam_unix(sshd:session): session closed for user core Apr 20 21:07:44.662405 systemd[1]: sshd@37-4106-10.0.0.6:22-10.0.0.1:54790.service: Deactivated successfully. Apr 20 21:07:44.664234 systemd[1]: session-39.scope: Deactivated successfully. 
Apr 20 21:07:44.664996 systemd-logind[1620]: Session 39 logged out. Waiting for processes to exit. Apr 20 21:07:44.667669 systemd-logind[1620]: Removed session 39. Apr 20 21:07:49.706638 systemd[1]: Started sshd@38-13-10.0.0.6:22-10.0.0.1:55790.service - OpenSSH per-connection server daemon (10.0.0.1:55790). Apr 20 21:07:49.827687 sshd[6959]: Accepted publickey for core from 10.0.0.1 port 55790 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:07:49.829346 sshd-session[6959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:07:49.840122 systemd-logind[1620]: New session '40' of user 'core' with class 'user' and type 'tty'. Apr 20 21:07:49.848805 systemd[1]: Started session-40.scope - Session 40 of User core. Apr 20 21:07:50.031050 sshd[6963]: Connection closed by 10.0.0.1 port 55790 Apr 20 21:07:50.031972 sshd-session[6959]: pam_unix(sshd:session): session closed for user core Apr 20 21:07:50.037011 systemd[1]: sshd@38-13-10.0.0.6:22-10.0.0.1:55790.service: Deactivated successfully. Apr 20 21:07:50.042114 systemd[1]: session-40.scope: Deactivated successfully. Apr 20 21:07:50.043532 systemd-logind[1620]: Session 40 logged out. Waiting for processes to exit. Apr 20 21:07:50.046735 systemd-logind[1620]: Removed session 40. Apr 20 21:07:55.178034 systemd[1]: Started sshd@39-12297-10.0.0.6:22-10.0.0.1:55798.service - OpenSSH per-connection server daemon (10.0.0.1:55798). Apr 20 21:07:55.306415 sshd[6998]: Accepted publickey for core from 10.0.0.1 port 55798 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw Apr 20 21:07:55.310922 sshd-session[6998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 21:07:55.331460 systemd-logind[1620]: New session '41' of user 'core' with class 'user' and type 'tty'. Apr 20 21:07:55.346081 systemd[1]: Started session-41.scope - Session 41 of User core. 
Apr 20 21:07:55.649579 sshd[7002]: Connection closed by 10.0.0.1 port 55798
Apr 20 21:07:55.650728 sshd-session[6998]: pam_unix(sshd:session): session closed for user core
Apr 20 21:07:55.661571 systemd[1]: sshd@39-12297-10.0.0.6:22-10.0.0.1:55798.service: Deactivated successfully.
Apr 20 21:07:55.667454 systemd[1]: session-41.scope: Deactivated successfully.
Apr 20 21:07:55.669381 systemd-logind[1620]: Session 41 logged out. Waiting for processes to exit.
Apr 20 21:07:55.674446 systemd-logind[1620]: Removed session 41.
Apr 20 21:08:00.655581 systemd[1]: Started sshd@40-4107-10.0.0.6:22-10.0.0.1:42312.service - OpenSSH per-connection server daemon (10.0.0.1:42312).
Apr 20 21:08:00.825627 sshd[7036]: Accepted publickey for core from 10.0.0.1 port 42312 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:08:00.828576 sshd-session[7036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:08:00.854755 systemd-logind[1620]: New session '42' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:08:00.868922 systemd[1]: Started session-42.scope - Session 42 of User core.
Apr 20 21:08:01.277691 kubelet[2962]: E0420 21:08:01.277423 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:08:01.288761 sshd[7040]: Connection closed by 10.0.0.1 port 42312
Apr 20 21:08:01.294775 sshd-session[7036]: pam_unix(sshd:session): session closed for user core
Apr 20 21:08:01.306567 systemd[1]: sshd@40-4107-10.0.0.6:22-10.0.0.1:42312.service: Deactivated successfully.
Apr 20 21:08:01.319273 systemd[1]: session-42.scope: Deactivated successfully.
Apr 20 21:08:01.325453 systemd-logind[1620]: Session 42 logged out. Waiting for processes to exit.
Apr 20 21:08:01.326460 systemd-logind[1620]: Removed session 42.
Apr 20 21:08:06.330439 systemd[1]: Started sshd@41-8201-10.0.0.6:22-10.0.0.1:33052.service - OpenSSH per-connection server daemon (10.0.0.1:33052).
Apr 20 21:08:06.547964 sshd[7073]: Accepted publickey for core from 10.0.0.1 port 33052 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:08:06.550520 sshd-session[7073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:08:06.556452 systemd-logind[1620]: New session '43' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:08:06.588749 systemd[1]: Started session-43.scope - Session 43 of User core.
Apr 20 21:08:08.657281 sshd[7077]: Connection closed by 10.0.0.1 port 33052
Apr 20 21:08:08.661988 sshd-session[7073]: pam_unix(sshd:session): session closed for user core
Apr 20 21:08:08.734617 systemd[1]: sshd@41-8201-10.0.0.6:22-10.0.0.1:33052.service: Deactivated successfully.
Apr 20 21:08:08.879092 systemd[1]: session-43.scope: Deactivated successfully.
Apr 20 21:08:08.880346 systemd[1]: session-43.scope: Consumed 1.580s CPU time, 16.2M memory peak.
Apr 20 21:08:08.887051 systemd-logind[1620]: Session 43 logged out. Waiting for processes to exit.
Apr 20 21:08:08.892803 systemd-logind[1620]: Removed session 43.
Apr 20 21:08:13.895516 systemd[1]: Started sshd@42-12298-10.0.0.6:22-10.0.0.1:33056.service - OpenSSH per-connection server daemon (10.0.0.1:33056).
Apr 20 21:08:14.208762 sshd[7131]: Accepted publickey for core from 10.0.0.1 port 33056 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:08:14.210433 sshd-session[7131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:08:14.249671 systemd-logind[1620]: New session '44' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:08:14.300984 systemd[1]: Started session-44.scope - Session 44 of User core.
Apr 20 21:08:14.383768 kubelet[2962]: E0420 21:08:14.379877 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:08:15.290755 kubelet[2962]: E0420 21:08:15.290595 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:08:17.097892 sshd[7135]: Connection closed by 10.0.0.1 port 33056
Apr 20 21:08:17.104589 sshd-session[7131]: pam_unix(sshd:session): session closed for user core
Apr 20 21:08:17.353329 systemd[1]: sshd@42-12298-10.0.0.6:22-10.0.0.1:33056.service: Deactivated successfully.
Apr 20 21:08:17.462093 systemd[1]: session-44.scope: Deactivated successfully.
Apr 20 21:08:17.469350 systemd[1]: session-44.scope: Consumed 1.934s CPU time, 16.1M memory peak.
Apr 20 21:08:17.500742 systemd-logind[1620]: Session 44 logged out. Waiting for processes to exit.
Apr 20 21:08:17.559116 systemd-logind[1620]: Removed session 44.
Apr 20 21:08:22.189939 systemd[1]: Started sshd@43-14-10.0.0.6:22-10.0.0.1:56958.service - OpenSSH per-connection server daemon (10.0.0.1:56958).
Apr 20 21:08:22.466877 sshd[7168]: Accepted publickey for core from 10.0.0.1 port 56958 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:08:22.488710 sshd-session[7168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:08:22.532847 systemd-logind[1620]: New session '45' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:08:22.545750 systemd[1]: Started session-45.scope - Session 45 of User core.
Apr 20 21:08:23.087519 sshd[7173]: Connection closed by 10.0.0.1 port 56958
Apr 20 21:08:23.091423 sshd-session[7168]: pam_unix(sshd:session): session closed for user core
Apr 20 21:08:23.185230 systemd[1]: sshd@43-14-10.0.0.6:22-10.0.0.1:56958.service: Deactivated successfully.
Apr 20 21:08:23.200537 systemd[1]: session-45.scope: Deactivated successfully.
Apr 20 21:08:23.232182 systemd-logind[1620]: Session 45 logged out. Waiting for processes to exit.
Apr 20 21:08:23.246532 systemd-logind[1620]: Removed session 45.
Apr 20 21:08:28.453884 systemd[1]: Started sshd@44-12299-10.0.0.6:22-10.0.0.1:46342.service - OpenSSH per-connection server daemon (10.0.0.1:46342).
Apr 20 21:08:28.741874 sshd[7209]: Accepted publickey for core from 10.0.0.1 port 46342 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:08:28.777398 sshd-session[7209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:08:28.922017 systemd-logind[1620]: New session '46' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:08:28.946918 systemd[1]: Started session-46.scope - Session 46 of User core.
Apr 20 21:08:30.073919 sshd[7233]: Connection closed by 10.0.0.1 port 46342
Apr 20 21:08:30.075884 sshd-session[7209]: pam_unix(sshd:session): session closed for user core
Apr 20 21:08:30.173518 systemd[1]: sshd@44-12299-10.0.0.6:22-10.0.0.1:46342.service: Deactivated successfully.
Apr 20 21:08:30.193024 systemd[1]: session-46.scope: Deactivated successfully.
Apr 20 21:08:30.195682 systemd[1]: session-46.scope: Consumed 1.065s CPU time, 18.1M memory peak.
Apr 20 21:08:30.239410 systemd-logind[1620]: Session 46 logged out. Waiting for processes to exit.
Apr 20 21:08:30.240905 systemd-logind[1620]: Removed session 46.
Apr 20 21:08:30.281710 kubelet[2962]: E0420 21:08:30.281358 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:08:35.195746 systemd[1]: Started sshd@45-15-10.0.0.6:22-10.0.0.1:46388.service - OpenSSH per-connection server daemon (10.0.0.1:46388).
Apr 20 21:08:35.292428 kubelet[2962]: E0420 21:08:35.292330 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:08:35.599538 sshd[7267]: Accepted publickey for core from 10.0.0.1 port 46388 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:08:35.613308 sshd-session[7267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:08:35.649654 systemd-logind[1620]: New session '47' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:08:35.705729 systemd[1]: Started session-47.scope - Session 47 of User core.
Apr 20 21:08:37.174354 sshd[7271]: Connection closed by 10.0.0.1 port 46388
Apr 20 21:08:37.178097 sshd-session[7267]: pam_unix(sshd:session): session closed for user core
Apr 20 21:08:37.275927 systemd[1]: sshd@45-15-10.0.0.6:22-10.0.0.1:46388.service: Deactivated successfully.
Apr 20 21:08:37.304507 systemd[1]: session-47.scope: Deactivated successfully.
Apr 20 21:08:37.305181 systemd[1]: session-47.scope: Consumed 1.231s CPU time, 15.9M memory peak.
Apr 20 21:08:37.307156 systemd-logind[1620]: Session 47 logged out. Waiting for processes to exit.
Apr 20 21:08:37.315521 systemd-logind[1620]: Removed session 47.
Apr 20 21:08:39.299982 kubelet[2962]: I0420 21:08:39.298270 2962 scope.go:122] "RemoveContainer" containerID="51377bdc86358974add5e266e89c3b2fbaeed0bcb73cbd3c56fbd71038847b2c"
Apr 20 21:08:39.362124 kubelet[2962]: E0420 21:08:39.302980 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:08:39.362124 kubelet[2962]: E0420 21:08:39.307128 2962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 20 21:08:42.526816 systemd[1]: Started sshd@46-12300-10.0.0.6:22-10.0.0.1:43500.service - OpenSSH per-connection server daemon (10.0.0.1:43500).
Apr 20 21:08:43.233758 sshd[7304]: Accepted publickey for core from 10.0.0.1 port 43500 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:08:43.249382 sshd-session[7304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:08:43.515834 systemd-logind[1620]: New session '48' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:08:43.680763 systemd[1]: Started session-48.scope - Session 48 of User core.
Apr 20 21:08:44.201250 sshd[7313]: Connection closed by 10.0.0.1 port 43500
Apr 20 21:08:44.202221 sshd-session[7304]: pam_unix(sshd:session): session closed for user core
Apr 20 21:08:44.205666 systemd[1]: sshd@46-12300-10.0.0.6:22-10.0.0.1:43500.service: Deactivated successfully.
Apr 20 21:08:44.213028 systemd[1]: session-48.scope: Deactivated successfully.
Apr 20 21:08:44.215274 systemd-logind[1620]: Session 48 logged out. Waiting for processes to exit.
Apr 20 21:08:44.216389 systemd-logind[1620]: Removed session 48.
Apr 20 21:08:44.274973 kubelet[2962]: E0420 21:08:44.274704 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:08:49.401895 systemd[1]: Started sshd@47-16-10.0.0.6:22-10.0.0.1:43442.service - OpenSSH per-connection server daemon (10.0.0.1:43442).
Apr 20 21:08:50.367339 containerd[1648]: time="2026-04-20T21:08:50.365707247Z" level=info msg="container event discarded" container=e7fd4f28b5abfd786bae86d00aa5a687eec5dd140b95f14df9f6cdd767547cf1 type=CONTAINER_STOPPED_EVENT
Apr 20 21:08:50.386681 sshd[7350]: Accepted publickey for core from 10.0.0.1 port 43442 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:08:50.405739 sshd-session[7350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:08:50.704823 systemd-logind[1620]: New session '49' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:08:50.787951 systemd[1]: Started session-49.scope - Session 49 of User core.
Apr 20 21:08:52.902816 sshd[7365]: Connection closed by 10.0.0.1 port 43442
Apr 20 21:08:52.915009 sshd-session[7350]: pam_unix(sshd:session): session closed for user core
Apr 20 21:08:53.029030 systemd[1]: sshd@47-16-10.0.0.6:22-10.0.0.1:43442.service: Deactivated successfully.
Apr 20 21:08:53.067480 systemd[1]: session-49.scope: Deactivated successfully.
Apr 20 21:08:53.076677 systemd[1]: session-49.scope: Consumed 1.433s CPU time, 16.4M memory peak.
Apr 20 21:08:53.177934 systemd-logind[1620]: Session 49 logged out. Waiting for processes to exit.
Apr 20 21:08:53.191481 systemd-logind[1620]: Removed session 49.
Apr 20 21:08:53.410852 containerd[1648]: time="2026-04-20T21:08:53.408998996Z" level=info msg="container event discarded" container=ed01da2a3c67271d8feef89bf4b4e957e8a155019bcada3f2af9e31dc61ccd70 type=CONTAINER_DELETED_EVENT
Apr 20 21:08:57.980996 systemd[1]: Started sshd@48-4108-10.0.0.6:22-10.0.0.1:50388.service - OpenSSH per-connection server daemon (10.0.0.1:50388).
Apr 20 21:08:59.000800 sshd[7400]: Accepted publickey for core from 10.0.0.1 port 50388 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:08:59.070187 sshd-session[7400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:08:59.413168 systemd-logind[1620]: New session '50' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:08:59.480295 systemd[1]: Started session-50.scope - Session 50 of User core.
Apr 20 21:09:03.362012 sshd[7410]: Connection closed by 10.0.0.1 port 50388
Apr 20 21:09:03.364417 sshd-session[7400]: pam_unix(sshd:session): session closed for user core
Apr 20 21:09:03.563882 systemd[1]: sshd@48-4108-10.0.0.6:22-10.0.0.1:50388.service: Deactivated successfully.
Apr 20 21:09:04.678420 systemd[1]: session-50.scope: Deactivated successfully.
Apr 20 21:09:04.691836 systemd[1]: session-50.scope: Consumed 2.278s CPU time, 20.7M memory peak.
Apr 20 21:09:04.883962 systemd-logind[1620]: Session 50 logged out. Waiting for processes to exit.
Apr 20 21:09:05.005209 systemd-logind[1620]: Removed session 50.
Apr 20 21:09:08.455685 kubelet[2962]: E0420 21:09:08.455441 2962 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.175s"
Apr 20 21:09:08.925964 systemd[1]: Started sshd@49-12301-10.0.0.6:22-10.0.0.1:59156.service - OpenSSH per-connection server daemon (10.0.0.1:59156).
Apr 20 21:09:09.404483 sshd[7457]: Accepted publickey for core from 10.0.0.1 port 59156 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:09:09.477277 sshd-session[7457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:09:09.697088 systemd-logind[1620]: New session '51' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:09:09.806376 systemd[1]: Started session-51.scope - Session 51 of User core.
Apr 20 21:09:11.604685 sshd[7461]: Connection closed by 10.0.0.1 port 59156
Apr 20 21:09:11.610456 sshd-session[7457]: pam_unix(sshd:session): session closed for user core
Apr 20 21:09:11.647054 systemd[1]: sshd@49-12301-10.0.0.6:22-10.0.0.1:59156.service: Deactivated successfully.
Apr 20 21:09:11.679900 systemd[1]: session-51.scope: Deactivated successfully.
Apr 20 21:09:11.684677 systemd[1]: session-51.scope: Consumed 1.216s CPU time, 18.1M memory peak.
Apr 20 21:09:11.806577 systemd-logind[1620]: Session 51 logged out. Waiting for processes to exit.
Apr 20 21:09:11.844692 systemd-logind[1620]: Removed session 51.
Apr 20 21:09:16.704108 systemd[1]: Started sshd@50-4109-10.0.0.6:22-10.0.0.1:51652.service - OpenSSH per-connection server daemon (10.0.0.1:51652).
Apr 20 21:09:17.000047 sshd[7500]: Accepted publickey for core from 10.0.0.1 port 51652 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:09:17.014324 sshd-session[7500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:09:17.180393 systemd-logind[1620]: New session '52' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:09:17.274203 systemd[1]: Started session-52.scope - Session 52 of User core.
Apr 20 21:09:17.543181 containerd[1648]: time="2026-04-20T21:09:17.542321155Z" level=info msg="container event discarded" container=603be50f9ab290ac923f501000870bd779b7306ec21604d64659f46e46c72743 type=CONTAINER_STOPPED_EVENT
Apr 20 21:09:18.293470 kubelet[2962]: E0420 21:09:18.290884 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:09:18.460632 containerd[1648]: time="2026-04-20T21:09:18.450455886Z" level=info msg="container event discarded" container=ed45505d9ec252769a448b828a1c4b3bec196065280db4c0a978c43a9e86e425 type=CONTAINER_DELETED_EVENT
Apr 20 21:09:19.027187 sshd[7504]: Connection closed by 10.0.0.1 port 51652
Apr 20 21:09:19.029577 sshd-session[7500]: pam_unix(sshd:session): session closed for user core
Apr 20 21:09:19.123037 systemd[1]: sshd@50-4109-10.0.0.6:22-10.0.0.1:51652.service: Deactivated successfully.
Apr 20 21:09:19.143428 systemd[1]: session-52.scope: Deactivated successfully.
Apr 20 21:09:19.143795 systemd[1]: session-52.scope: Consumed 1.347s CPU time, 16M memory peak.
Apr 20 21:09:19.151289 systemd-logind[1620]: Session 52 logged out. Waiting for processes to exit.
Apr 20 21:09:19.159079 systemd-logind[1620]: Removed session 52.
Apr 20 21:09:24.271110 systemd[1]: Started sshd@51-12302-10.0.0.6:22-10.0.0.1:51662.service - OpenSSH per-connection server daemon (10.0.0.1:51662).
Apr 20 21:09:24.935699 sshd[7540]: Accepted publickey for core from 10.0.0.1 port 51662 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:09:24.942739 sshd-session[7540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:09:25.082389 systemd-logind[1620]: New session '53' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:09:25.157008 systemd[1]: Started session-53.scope - Session 53 of User core.
Apr 20 21:09:26.656441 sshd[7558]: Connection closed by 10.0.0.1 port 51662
Apr 20 21:09:26.658787 sshd-session[7540]: pam_unix(sshd:session): session closed for user core
Apr 20 21:09:26.682125 systemd[1]: sshd@51-12302-10.0.0.6:22-10.0.0.1:51662.service: Deactivated successfully.
Apr 20 21:09:26.711957 systemd[1]: session-53.scope: Deactivated successfully.
Apr 20 21:09:26.714271 systemd[1]: session-53.scope: Consumed 1.138s CPU time, 16M memory peak.
Apr 20 21:09:26.751892 systemd-logind[1620]: Session 53 logged out. Waiting for processes to exit.
Apr 20 21:09:26.756315 systemd-logind[1620]: Removed session 53.
Apr 20 21:09:28.283406 kubelet[2962]: E0420 21:09:28.281882 2962 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 21:09:31.765872 systemd[1]: Started sshd@52-8202-10.0.0.6:22-10.0.0.1:44290.service - OpenSSH per-connection server daemon (10.0.0.1:44290).
Apr 20 21:09:32.006212 sshd[7599]: Accepted publickey for core from 10.0.0.1 port 44290 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:09:32.015886 sshd-session[7599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:09:32.023228 systemd-logind[1620]: New session '54' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:09:32.047840 systemd[1]: Started session-54.scope - Session 54 of User core.
Apr 20 21:09:33.258383 sshd[7603]: Connection closed by 10.0.0.1 port 44290
Apr 20 21:09:33.261559 sshd-session[7599]: pam_unix(sshd:session): session closed for user core
Apr 20 21:09:33.437623 systemd[1]: sshd@52-8202-10.0.0.6:22-10.0.0.1:44290.service: Deactivated successfully.
Apr 20 21:09:33.566342 systemd[1]: session-54.scope: Deactivated successfully.
Apr 20 21:09:33.571331 systemd[1]: session-54.scope: Consumed 1.027s CPU time, 16M memory peak.
Apr 20 21:09:33.589920 systemd-logind[1620]: Session 54 logged out. Waiting for processes to exit.
Apr 20 21:09:33.592914 systemd-logind[1620]: Removed session 54.
Apr 20 21:09:38.562040 systemd[1]: Started sshd@53-12303-10.0.0.6:22-10.0.0.1:48288.service - OpenSSH per-connection server daemon (10.0.0.1:48288).
Apr 20 21:09:39.051836 sshd[7637]: Accepted publickey for core from 10.0.0.1 port 48288 ssh2: RSA SHA256:mNBOi5PT40PLmSEG5oVQyBTmAdi6uLWymhIPyYmLiRw
Apr 20 21:09:39.056390 sshd-session[7637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 21:09:39.075656 systemd-logind[1620]: New session '55' of user 'core' with class 'user' and type 'tty'.
Apr 20 21:09:39.104886 systemd[1]: Started session-55.scope - Session 55 of User core.
Apr 20 21:09:39.443110 sshd[7641]: Connection closed by 10.0.0.1 port 48288
Apr 20 21:09:39.443412 sshd-session[7637]: pam_unix(sshd:session): session closed for user core
Apr 20 21:09:39.453592 systemd[1]: sshd@53-12303-10.0.0.6:22-10.0.0.1:48288.service: Deactivated successfully.
Apr 20 21:09:39.550694 systemd[1]: session-55.scope: Deactivated successfully.
Apr 20 21:09:39.558441 systemd-logind[1620]: Session 55 logged out. Waiting for processes to exit.
Apr 20 21:09:39.596683 systemd-logind[1620]: Removed session 55.