Apr 22 23:43:04.213622 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Apr 22 21:57:11 -00 2026
Apr 22 23:43:04.213648 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1111a64faf79e22c6b231a95ce03ff7308375557d63046382fb274ec481eaec
Apr 22 23:43:04.213659 kernel: BIOS-provided physical RAM map:
Apr 22 23:43:04.213664 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 22 23:43:04.213669 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 22 23:43:04.213674 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 22 23:43:04.213680 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 22 23:43:04.213685 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 22 23:43:04.214490 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 22 23:43:04.214609 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 22 23:43:04.214669 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 22 23:43:04.214676 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 22 23:43:04.214681 kernel: NX (Execute Disable) protection: active
Apr 22 23:43:04.214686 kernel: APIC: Static calls initialized
Apr 22 23:43:04.214693 kernel: SMBIOS 2.8 present.
Apr 22 23:43:04.214700 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 22 23:43:04.214753 kernel: DMI: Memory slots populated: 1/1
Apr 22 23:43:04.214760 kernel: Hypervisor detected: KVM
Apr 22 23:43:04.214765 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 22 23:43:04.214770 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 22 23:43:04.214776 kernel: kvm-clock: using sched offset of 27433462807 cycles
Apr 22 23:43:04.214782 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 22 23:43:04.214818 kernel: tsc: Detected 2793.438 MHz processor
Apr 22 23:43:04.214824 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 22 23:43:04.214833 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 22 23:43:04.214839 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 22 23:43:04.214844 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 22 23:43:04.214850 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 22 23:43:04.214856 kernel: Using GB pages for direct mapping
Apr 22 23:43:04.214862 kernel: ACPI: Early table checksum verification disabled
Apr 22 23:43:04.214867 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 22 23:43:04.214875 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 22 23:43:04.214881 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 22 23:43:04.214887 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 22 23:43:04.214892 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 22 23:43:04.214898 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 22 23:43:04.214904 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 22 23:43:04.214910 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 22 23:43:04.214917 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 22 23:43:04.214923 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 22 23:43:04.214931 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 22 23:43:04.214937 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 22 23:43:04.214943 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 22 23:43:04.214951 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 22 23:43:04.214957 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 22 23:43:04.214963 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 22 23:43:04.214969 kernel: No NUMA configuration found
Apr 22 23:43:04.214975 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 22 23:43:04.214981 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Apr 22 23:43:04.214987 kernel: Zone ranges:
Apr 22 23:43:04.214995 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 22 23:43:04.215001 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 22 23:43:04.215007 kernel: Normal empty
Apr 22 23:43:04.215012 kernel: Device empty
Apr 22 23:43:04.215018 kernel: Movable zone start for each node
Apr 22 23:43:04.215024 kernel: Early memory node ranges
Apr 22 23:43:04.215030 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 22 23:43:04.215037 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 22 23:43:04.215043 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 22 23:43:04.215049 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 22 23:43:04.215055 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 22 23:43:04.215061 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 22 23:43:04.215113 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 22 23:43:04.215120 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 22 23:43:04.215129 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 22 23:43:04.215135 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 22 23:43:04.215141 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 22 23:43:04.215191 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 22 23:43:04.215197 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 22 23:43:04.215203 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 22 23:43:04.215209 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 22 23:43:04.215305 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 22 23:43:04.215314 kernel: TSC deadline timer available
Apr 22 23:43:04.215320 kernel: CPU topo: Max. logical packages: 1
Apr 22 23:43:04.215326 kernel: CPU topo: Max. logical dies: 1
Apr 22 23:43:04.215332 kernel: CPU topo: Max. dies per package: 1
Apr 22 23:43:04.215338 kernel: CPU topo: Max. threads per core: 1
Apr 22 23:43:04.215344 kernel: CPU topo: Num. cores per package: 4
Apr 22 23:43:04.215350 kernel: CPU topo: Num. threads per package: 4
Apr 22 23:43:04.215358 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 22 23:43:04.215364 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 22 23:43:04.215370 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 22 23:43:04.215376 kernel: kvm-guest: setup PV sched yield
Apr 22 23:43:04.215381 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 22 23:43:04.215387 kernel: Booting paravirtualized kernel on KVM
Apr 22 23:43:04.215394 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 22 23:43:04.215401 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 22 23:43:04.215407 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u524288
Apr 22 23:43:04.215413 kernel: pcpu-alloc: s207448 r8192 d30120 u524288 alloc=1*2097152
Apr 22 23:43:04.215419 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 22 23:43:04.215425 kernel: kvm-guest: PV spinlocks enabled
Apr 22 23:43:04.215431 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 22 23:43:04.215437 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1111a64faf79e22c6b231a95ce03ff7308375557d63046382fb274ec481eaec
Apr 22 23:43:04.215445 kernel: random: crng init done
Apr 22 23:43:04.215451 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 22 23:43:04.215457 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 22 23:43:04.215463 kernel: Fallback order for Node 0: 0
Apr 22 23:43:04.215469 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Apr 22 23:43:04.215475 kernel: Policy zone: DMA32
Apr 22 23:43:04.215481 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 22 23:43:04.215489 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 22 23:43:04.215495 kernel: ftrace: allocating 40157 entries in 157 pages
Apr 22 23:43:04.215501 kernel: ftrace: allocated 157 pages with 5 groups
Apr 22 23:43:04.215507 kernel: Dynamic Preempt: voluntary
Apr 22 23:43:04.215512 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 22 23:43:04.215519 kernel: rcu: RCU event tracing is enabled.
Apr 22 23:43:04.215525 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 22 23:43:04.215533 kernel: Trampoline variant of Tasks RCU enabled.
Apr 22 23:43:04.215539 kernel: Rude variant of Tasks RCU enabled.
Apr 22 23:43:04.215637 kernel: Tracing variant of Tasks RCU enabled.
Apr 22 23:43:04.215643 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 22 23:43:04.215649 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 22 23:43:04.215655 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 22 23:43:04.215661 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 22 23:43:04.215669 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 22 23:43:04.215675 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 22 23:43:04.215681 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 22 23:43:04.215687 kernel: Console: colour VGA+ 80x25
Apr 22 23:43:04.215699 kernel: printk: legacy console [ttyS0] enabled
Apr 22 23:43:04.215707 kernel: ACPI: Core revision 20240827
Apr 22 23:43:04.215714 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 22 23:43:04.215720 kernel: APIC: Switch to symmetric I/O mode setup
Apr 22 23:43:04.215726 kernel: x2apic enabled
Apr 22 23:43:04.215734 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 22 23:43:04.215741 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 22 23:43:04.215791 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 22 23:43:04.215798 kernel: kvm-guest: setup PV IPIs
Apr 22 23:43:04.215807 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 22 23:43:04.215814 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 22 23:43:04.215821 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 22 23:43:04.215827 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 22 23:43:04.215833 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 22 23:43:04.215840 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 22 23:43:04.215846 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 22 23:43:04.215854 kernel: Spectre V2 : Mitigation: Retpolines
Apr 22 23:43:04.215860 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 22 23:43:04.215867 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 22 23:43:04.215874 kernel: RETBleed: Vulnerable
Apr 22 23:43:04.215880 kernel: Speculative Store Bypass: Vulnerable
Apr 22 23:43:04.215886 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 22 23:43:04.215940 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 22 23:43:04.215950 kernel: active return thunk: its_return_thunk
Apr 22 23:43:04.215956 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 22 23:43:04.215962 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 22 23:43:04.215969 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 22 23:43:04.215975 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 22 23:43:04.215982 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 22 23:43:04.215988 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 22 23:43:04.215996 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 22 23:43:04.216002 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 22 23:43:04.216008 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 22 23:43:04.216015 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 22 23:43:04.216021 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 22 23:43:04.216028 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 22 23:43:04.216034 kernel: Freeing SMP alternatives memory: 32K
Apr 22 23:43:04.216088 kernel: pid_max: default: 32768 minimum: 301
Apr 22 23:43:04.216095 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 22 23:43:04.216101 kernel: landlock: Up and running.
Apr 22 23:43:04.216108 kernel: SELinux: Initializing.
Apr 22 23:43:04.216114 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 22 23:43:04.216166 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 22 23:43:04.216173 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 22 23:43:04.216182 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 22 23:43:04.216189 kernel: signal: max sigframe size: 3632
Apr 22 23:43:04.216195 kernel: rcu: Hierarchical SRCU implementation.
Apr 22 23:43:04.216202 kernel: rcu: Max phase no-delay instances is 400.
Apr 22 23:43:04.216208 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 22 23:43:04.216214 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 22 23:43:04.216310 kernel: smp: Bringing up secondary CPUs ...
Apr 22 23:43:04.216360 kernel: smpboot: x86: Booting SMP configuration:
Apr 22 23:43:04.216370 kernel: .... node #0, CPUs: #1 #2 #3
Apr 22 23:43:04.216377 kernel: smp: Brought up 1 node, 4 CPUs
Apr 22 23:43:04.216383 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 22 23:43:04.216390 kernel: Memory: 2444332K/2571752K available (14336K kernel code, 2453K rwdata, 31656K rodata, 15552K init, 2472K bss, 121532K reserved, 0K cma-reserved)
Apr 22 23:43:04.216397 kernel: devtmpfs: initialized
Apr 22 23:43:04.216403 kernel: x86/mm: Memory block size: 128MB
Apr 22 23:43:04.216410 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 22 23:43:04.216418 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 22 23:43:04.216424 kernel: pinctrl core: initialized pinctrl subsystem
Apr 22 23:43:04.216430 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 22 23:43:04.216437 kernel: audit: initializing netlink subsys (disabled)
Apr 22 23:43:04.216443 kernel: audit: type=2000 audit(1776901367.455:1): state=initialized audit_enabled=0 res=1
Apr 22 23:43:04.216449 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 22 23:43:04.216456 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 22 23:43:04.216464 kernel: cpuidle: using governor menu
Apr 22 23:43:04.216470 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 22 23:43:04.216477 kernel: dca service started, version 1.12.1
Apr 22 23:43:04.216483 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Apr 22 23:43:04.216489 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 22 23:43:04.216496 kernel: PCI: Using configuration type 1 for base access
Apr 22 23:43:04.216502 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 22 23:43:04.216510 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 22 23:43:04.216517 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 22 23:43:04.216523 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 22 23:43:04.216529 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 22 23:43:04.216536 kernel: ACPI: Added _OSI(Module Device)
Apr 22 23:43:04.216591 kernel: ACPI: Added _OSI(Processor Device)
Apr 22 23:43:04.216598 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 22 23:43:04.216607 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 22 23:43:04.216613 kernel: ACPI: Interpreter enabled
Apr 22 23:43:04.216620 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 22 23:43:04.216626 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 22 23:43:04.216632 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 22 23:43:04.216639 kernel: PCI: Using E820 reservations for host bridge windows
Apr 22 23:43:04.216645 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 22 23:43:04.216653 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 22 23:43:04.217039 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 22 23:43:04.217132 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 22 23:43:04.218192 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 22 23:43:04.218215 kernel: PCI host bridge to bus 0000:00
Apr 22 23:43:04.218535 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 22 23:43:04.218684 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 22 23:43:04.219169 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 22 23:43:04.219495 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 22 23:43:04.219633 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 22 23:43:04.219708 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 22 23:43:04.219787 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 22 23:43:04.219889 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 22 23:43:04.220179 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 22 23:43:04.220819 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Apr 22 23:43:04.220921 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Apr 22 23:43:04.221003 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Apr 22 23:43:04.221104 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 22 23:43:04.221216 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 19531 usecs
Apr 22 23:43:04.222914 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 22 23:43:04.223011 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Apr 22 23:43:04.223096 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Apr 22 23:43:04.223184 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 22 23:43:04.224191 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 22 23:43:04.224983 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Apr 22 23:43:04.225098 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Apr 22 23:43:04.225195 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 22 23:43:04.231535 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 22 23:43:04.231699 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Apr 22 23:43:04.231780 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Apr 22 23:43:04.231860 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 22 23:43:04.231939 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Apr 22 23:43:04.232026 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 22 23:43:04.232108 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 22 23:43:04.232186 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 12695 usecs
Apr 22 23:43:04.233387 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 22 23:43:04.233515 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Apr 22 23:43:04.234365 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Apr 22 23:43:04.234472 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 22 23:43:04.234644 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Apr 22 23:43:04.234654 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 22 23:43:04.234661 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 22 23:43:04.234668 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 22 23:43:04.234674 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 22 23:43:04.234681 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 22 23:43:04.234688 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 22 23:43:04.234697 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 22 23:43:04.234704 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 22 23:43:04.234711 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 22 23:43:04.234717 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 22 23:43:04.234724 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 22 23:43:04.234731 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 22 23:43:04.234737 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 22 23:43:04.234745 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 22 23:43:04.234752 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 22 23:43:04.234758 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 22 23:43:04.234765 kernel: iommu: Default domain type: Translated
Apr 22 23:43:04.234771 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 22 23:43:04.234778 kernel: PCI: Using ACPI for IRQ routing
Apr 22 23:43:04.234785 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 22 23:43:04.234793 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 22 23:43:04.234800 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 22 23:43:04.234884 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 22 23:43:04.234965 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 22 23:43:04.235046 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 22 23:43:04.235054 kernel: vgaarb: loaded
Apr 22 23:43:04.235061 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 22 23:43:04.235070 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 22 23:43:04.235076 kernel: clocksource: Switched to clocksource kvm-clock
Apr 22 23:43:04.235083 kernel: VFS: Disk quotas dquot_6.6.0
Apr 22 23:43:04.235089 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 22 23:43:04.235096 kernel: pnp: PnP ACPI init
Apr 22 23:43:04.235186 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 22 23:43:04.235196 kernel: pnp: PnP ACPI: found 6 devices
Apr 22 23:43:04.235205 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 22 23:43:04.235211 kernel: NET: Registered PF_INET protocol family
Apr 22 23:43:04.235329 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 22 23:43:04.235346 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 22 23:43:04.235359 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 22 23:43:04.235372 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 22 23:43:04.235385 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 22 23:43:04.235405 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 22 23:43:04.235418 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 22 23:43:04.235430 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 22 23:43:04.235442 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 22 23:43:04.235457 kernel: NET: Registered PF_XDP protocol family
Apr 22 23:43:04.235657 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 22 23:43:04.235735 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 22 23:43:04.235814 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 22 23:43:04.235889 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 22 23:43:04.235963 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 22 23:43:04.236037 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 22 23:43:04.236045 kernel: PCI: CLS 0 bytes, default 64
Apr 22 23:43:04.236052 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 22 23:43:04.236061 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 22 23:43:04.236068 kernel: Initialise system trusted keyrings
Apr 22 23:43:04.236075 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 22 23:43:04.236082 kernel: Key type asymmetric registered
Apr 22 23:43:04.236088 kernel: Asymmetric key parser 'x509' registered
Apr 22 23:43:04.236095 kernel: hrtimer: interrupt took 5992988 ns
Apr 22 23:43:04.236101 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 22 23:43:04.236110 kernel: io scheduler mq-deadline registered
Apr 22 23:43:04.236116 kernel: io scheduler kyber registered
Apr 22 23:43:04.236123 kernel: io scheduler bfq registered
Apr 22 23:43:04.236129 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 22 23:43:04.236137 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 22 23:43:04.236143 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 22 23:43:04.236150 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 22 23:43:04.236158 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 22 23:43:04.236164 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 22 23:43:04.236171 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 22 23:43:04.236178 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 22 23:43:04.236184 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 22 23:43:04.236380 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 22 23:43:04.236391 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 22 23:43:04.236470 kernel: rtc_cmos 00:04: registered as rtc0
Apr 22 23:43:04.236605 kernel: rtc_cmos 00:04: setting system clock to 2026-04-22T23:42:56 UTC (1776901376)
Apr 22 23:43:04.236685 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 22 23:43:04.236694 kernel: intel_pstate: CPU model not supported
Apr 22 23:43:04.236700 kernel: NET: Registered PF_INET6 protocol family
Apr 22 23:43:04.236707 kernel: Segment Routing with IPv6
Apr 22 23:43:04.236713 kernel: In-situ OAM (IOAM) with IPv6
Apr 22 23:43:04.236722 kernel: NET: Registered PF_PACKET protocol family
Apr 22 23:43:04.236729 kernel: Key type dns_resolver registered
Apr 22 23:43:04.236735 kernel: IPI shorthand broadcast: enabled
Apr 22 23:43:04.236742 kernel: sched_clock: Marking stable (7400224915, 1739251491)->(10307201042, -1167724636)
Apr 22 23:43:04.236748 kernel: registered taskstats version 1
Apr 22 23:43:04.236755 kernel: Loading compiled-in X.509 certificates
Apr 22 23:43:04.236761 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 0793482f0b1477a4dee00a55cce942e30dec635a'
Apr 22 23:43:04.236770 kernel: Demotion targets for Node 0: null
Apr 22 23:43:04.236776 kernel: Key type .fscrypt registered
Apr 22 23:43:04.236782 kernel: Key type fscrypt-provisioning registered
Apr 22 23:43:04.236789 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 22 23:43:04.236795 kernel: ima: Allocated hash algorithm: sha1
Apr 22 23:43:04.236802 kernel: ima: No architecture policies found
Apr 22 23:43:04.236808 kernel: clk: Disabling unused clocks
Apr 22 23:43:04.236816 kernel: Freeing unused kernel image (initmem) memory: 15552K
Apr 22 23:43:04.236823 kernel: Write protecting the kernel read-only data: 47104k
Apr 22 23:43:04.236830 kernel: Freeing unused kernel image (rodata/data gap) memory: 1112K
Apr 22 23:43:04.236836 kernel: Run /init as init process
Apr 22 23:43:04.236843 kernel: with arguments:
Apr 22 23:43:04.236849 kernel: /init
Apr 22 23:43:04.236856 kernel: with environment:
Apr 22 23:43:04.236864 kernel: HOME=/
Apr 22 23:43:04.236870 kernel: TERM=linux
Apr 22 23:43:04.236877 kernel: SCSI subsystem initialized
Apr 22 23:43:04.236885 kernel: libata version 3.00 loaded.
Apr 22 23:43:04.236975 kernel: ahci 0000:00:1f.2: version 3.0
Apr 22 23:43:04.236986 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 22 23:43:04.237066 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Apr 22 23:43:04.237148 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Apr 22 23:43:04.237332 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 22 23:43:04.237652 kernel: scsi host0: ahci
Apr 22 23:43:04.237742 kernel: scsi host1: ahci
Apr 22 23:43:04.237829 kernel: scsi host2: ahci
Apr 22 23:43:04.237918 kernel: scsi host3: ahci
Apr 22 23:43:04.238001 kernel: scsi host4: ahci
Apr 22 23:43:04.238195 kernel: scsi host5: ahci
Apr 22 23:43:04.238205 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Apr 22 23:43:04.238212 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Apr 22 23:43:04.238314 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Apr 22 23:43:04.238324 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Apr 22 23:43:04.238331 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Apr 22 23:43:04.238338 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Apr 22 23:43:04.238344 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 22 23:43:04.238351 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 22 23:43:04.238360 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 22 23:43:04.238367 kernel: ata3.00: LPM support broken, forcing max_power
Apr 22 23:43:04.238375 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 22 23:43:04.238382 kernel: ata3.00: applying bridge limits
Apr 22 23:43:04.238389 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 22 23:43:04.238395 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 22 23:43:04.238402 kernel: ata3.00: LPM support broken, forcing max_power
Apr 22 23:43:04.238409 kernel: ata3.00: configured for UDMA/100
Apr 22 23:43:04.238415 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 22 23:43:04.238620 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 22 23:43:04.238716 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 22 23:43:04.238795 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Apr 22 23:43:04.238883 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 22 23:43:04.238892 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 22 23:43:04.238899 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 22 23:43:04.238908 kernel: GPT:16515071 != 27000831
Apr 22 23:43:04.238915 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 22 23:43:04.238921 kernel: GPT:16515071 != 27000831
Apr 22 23:43:04.238928 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 22 23:43:04.238934 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 22 23:43:04.239019 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 22 23:43:04.239029 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 22 23:43:04.239036 kernel: device-mapper: uevent: version 1.0.3 Apr 22 23:43:04.239043 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 22 23:43:04.239050 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Apr 22 23:43:04.239056 kernel: raid6: avx512x4 gen() 3266 MB/s Apr 22 23:43:04.239065 kernel: raid6: avx512x2 gen() 19371 MB/s Apr 22 23:43:04.239071 kernel: raid6: avx512x1 gen() 21459 MB/s Apr 22 23:43:04.239078 kernel: raid6: avx2x4 gen() 24206 MB/s Apr 22 23:43:04.239085 kernel: raid6: avx2x2 gen() 9367 MB/s Apr 22 23:43:04.239091 kernel: raid6: avx2x1 gen() 7595 MB/s Apr 22 23:43:04.239098 kernel: raid6: using algorithm avx2x4 gen() 24206 MB/s Apr 22 23:43:04.239104 kernel: raid6: .... xor() 6285 MB/s, rmw enabled Apr 22 23:43:04.239111 kernel: raid6: using avx512x2 recovery algorithm Apr 22 23:43:04.239119 kernel: xor: automatically using best checksumming function avx Apr 22 23:43:04.239126 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 22 23:43:04.239133 kernel: BTRFS: device fsid 3ae7ba34-f7bd-4b4e-97e5-7ce72707b9fd devid 1 transid 32 /dev/mapper/usr (253:0) scanned by mount (181) Apr 22 23:43:04.239140 kernel: BTRFS info (device dm-0): first mount of filesystem 3ae7ba34-f7bd-4b4e-97e5-7ce72707b9fd Apr 22 23:43:04.239147 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 22 23:43:04.239153 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 22 23:43:04.239162 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 22 23:43:04.239168 kernel: loop: module loaded Apr 22 23:43:04.239175 kernel: loop0: detected capacity change from 0 to 100560 Apr 22 23:43:04.239181 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 22 23:43:04.239190 systemd[1]: Successfully made /usr/ read-only. 
Apr 22 23:43:04.239199 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 22 23:43:04.239209 systemd[1]: Detected virtualization kvm. Apr 22 23:43:04.239216 systemd[1]: Detected architecture x86-64. Apr 22 23:43:04.239849 systemd[1]: Running in initrd. Apr 22 23:43:04.239857 systemd[1]: No hostname configured, using default hostname. Apr 22 23:43:04.239864 systemd[1]: Hostname set to . Apr 22 23:43:04.239871 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Apr 22 23:43:04.239879 systemd[1]: Queued start job for default target initrd.target. Apr 22 23:43:04.239909 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Apr 22 23:43:04.239916 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 22 23:43:04.239924 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 22 23:43:04.239932 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 22 23:43:04.239939 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 22 23:43:04.239947 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 22 23:43:04.239958 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 22 23:43:04.239965 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 22 23:43:04.239973 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Apr 22 23:43:04.240002 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 22 23:43:04.240010 systemd[1]: Reached target paths.target - Path Units. Apr 22 23:43:04.240017 systemd[1]: Reached target slices.target - Slice Units. Apr 22 23:43:04.240026 systemd[1]: Reached target swap.target - Swaps. Apr 22 23:43:04.240033 systemd[1]: Reached target timers.target - Timer Units. Apr 22 23:43:04.240040 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 22 23:43:04.240047 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 22 23:43:04.240054 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Apr 22 23:43:04.240062 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 22 23:43:04.240069 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 22 23:43:04.240078 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 22 23:43:04.240085 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 22 23:43:04.240092 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 22 23:43:04.240099 systemd[1]: Reached target sockets.target - Socket Units. Apr 22 23:43:04.240107 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 22 23:43:04.240114 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 22 23:43:04.240121 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 22 23:43:04.240130 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 22 23:43:04.240137 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). 
Apr 22 23:43:04.240145 systemd[1]: Starting systemd-fsck-usr.service... Apr 22 23:43:04.240152 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 22 23:43:04.240159 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 22 23:43:04.240168 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 22 23:43:04.240424 systemd-journald[318]: Collecting audit messages is enabled. Apr 22 23:43:04.240448 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 22 23:43:04.240456 kernel: audit: type=1130 audit(1776901384.238:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:04.240465 systemd-journald[318]: Journal started Apr 22 23:43:04.240482 systemd-journald[318]: Runtime Journal (/run/log/journal/c2437a255cb74b9db56af5972a45fc94) is 6M, max 48.1M, 42.1M free. Apr 22 23:43:04.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:04.256885 systemd[1]: Started systemd-journald.service - Journal Service. Apr 22 23:43:04.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:04.269026 kernel: audit: type=1130 audit(1776901384.267:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:04.272088 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Apr 22 23:43:04.313941 kernel: audit: type=1130 audit(1776901384.284:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:04.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:04.285619 systemd[1]: Finished systemd-fsck-usr.service. Apr 22 23:43:04.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:04.336015 kernel: audit: type=1130 audit(1776901384.320:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:04.347109 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 22 23:43:04.372808 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 22 23:43:04.386087 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 22 23:43:04.411496 kernel: Bridge firewalling registered Apr 22 23:43:04.411114 systemd-modules-load[323]: Inserted module 'br_netfilter' Apr 22 23:43:04.413902 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 22 23:43:05.321184 kernel: audit: type=1130 audit(1776901385.267:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:43:05.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:05.371155 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 22 23:43:05.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:05.431695 kernel: audit: type=1130 audit(1776901385.409:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:05.433093 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 22 23:43:05.439458 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 22 23:43:05.479908 systemd-tmpfiles[331]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 22 23:43:05.518129 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 22 23:43:05.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:05.593794 kernel: audit: type=1130 audit(1776901385.538:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:05.610076 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 22 23:43:05.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:05.647802 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 22 23:43:05.663485 kernel: audit: type=1130 audit(1776901385.612:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:05.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:05.681954 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 22 23:43:05.720111 kernel: audit: type=1130 audit(1776901385.663:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:05.720145 kernel: audit: type=1334 audit(1776901385.666:11): prog-id=6 op=LOAD Apr 22 23:43:05.666000 audit: BPF prog-id=6 op=LOAD Apr 22 23:43:05.730994 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 22 23:43:05.743449 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 22 23:43:05.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:05.768018 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 22 23:43:05.807778 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 22 23:43:05.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:05.850364 dracut-cmdline[358]: dracut-109 Apr 22 23:43:05.874889 dracut-cmdline[358]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1111a64faf79e22c6b231a95ce03ff7308375557d63046382fb274ec481eaec Apr 22 23:43:06.020425 systemd-resolved[350]: Positive Trust Anchors: Apr 22 23:43:06.020473 systemd-resolved[350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 22 23:43:06.020477 systemd-resolved[350]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Apr 22 23:43:06.020508 systemd-resolved[350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 22 23:43:06.152412 systemd-resolved[350]: Defaulting to hostname 'linux'. Apr 22 23:43:06.166994 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 22 23:43:06.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Apr 22 23:43:06.178753 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 22 23:43:06.885800 kernel: Loading iSCSI transport class v2.0-870. Apr 22 23:43:07.132043 kernel: iscsi: registered transport (tcp) Apr 22 23:43:07.343528 kernel: iscsi: registered transport (qla4xxx) Apr 22 23:43:07.344041 kernel: QLogic iSCSI HBA Driver Apr 22 23:43:07.559752 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 22 23:43:07.647872 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 22 23:43:07.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:07.662791 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 22 23:43:08.037507 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 22 23:43:08.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:08.052097 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 22 23:43:08.060171 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 22 23:43:08.157941 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 22 23:43:08.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:43:08.176000 audit: BPF prog-id=7 op=LOAD Apr 22 23:43:08.176000 audit: BPF prog-id=8 op=LOAD Apr 22 23:43:08.179083 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 22 23:43:08.249479 systemd-udevd[580]: Using default interface naming scheme 'v257'. Apr 22 23:43:08.276216 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 22 23:43:08.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:08.301897 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 22 23:43:08.380680 dracut-pre-trigger[628]: rd.md=0: removing MD RAID activation Apr 22 23:43:08.836740 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 22 23:43:08.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:08.864454 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 22 23:43:09.061168 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 22 23:43:09.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:09.111000 audit: BPF prog-id=9 op=LOAD Apr 22 23:43:09.136669 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 22 23:43:09.348788 systemd-networkd[725]: lo: Link UP Apr 22 23:43:09.348799 systemd-networkd[725]: lo: Gained carrier Apr 22 23:43:09.494020 kernel: kauditd_printk_skb: 12 callbacks suppressed Apr 22 23:43:09.496142 kernel: audit: type=1130 audit(1776901389.366:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:09.496181 kernel: audit: type=1130 audit(1776901389.401:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:09.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:09.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:09.354922 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 22 23:43:09.397951 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 22 23:43:09.410436 systemd[1]: Reached target network.target - Network. Apr 22 23:43:09.509805 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 22 23:43:09.614440 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 22 23:43:09.635971 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 22 23:43:09.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:43:09.670509 kernel: audit: type=1130 audit(1776901389.652:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:09.686421 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 22 23:43:09.703776 kernel: cryptd: max_cpu_qlen set to 1000 Apr 22 23:43:09.736101 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 22 23:43:09.774719 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 22 23:43:09.811816 kernel: AES CTR mode by8 optimization enabled Apr 22 23:43:09.811941 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 22 23:43:09.815563 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 22 23:43:09.846978 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 22 23:43:09.867385 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 22 23:43:09.868039 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 22 23:43:09.868043 systemd-networkd[725]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 22 23:43:09.879794 systemd-networkd[725]: eth0: Link UP Apr 22 23:43:09.879904 systemd-networkd[725]: eth0: Gained carrier Apr 22 23:43:10.053703 kernel: audit: type=1131 audit(1776901390.023:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:43:10.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:09.879916 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 22 23:43:09.917944 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 22 23:43:09.989671 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 22 23:43:10.010880 systemd-networkd[725]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 22 23:43:10.018920 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 22 23:43:10.019045 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 22 23:43:10.023807 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 22 23:43:10.113440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 22 23:43:10.163026 disk-uuid[859]: Primary Header is updated. Apr 22 23:43:10.163026 disk-uuid[859]: Secondary Entries is updated. Apr 22 23:43:10.163026 disk-uuid[859]: Secondary Header is updated. Apr 22 23:43:10.225894 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 22 23:43:10.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:10.256994 kernel: audit: type=1130 audit(1776901390.235:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:11.259966 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 22 23:43:11.296686 kernel: audit: type=1130 audit(1776901391.267:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:11.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:11.297960 disk-uuid[865]: Warning: The kernel is still using the old partition table. Apr 22 23:43:11.297960 disk-uuid[865]: The new table will be used at the next reboot or after you Apr 22 23:43:11.297960 disk-uuid[865]: run partprobe(8) or kpartx(8) Apr 22 23:43:11.297960 disk-uuid[865]: The operation has completed successfully. Apr 22 23:43:11.367030 kernel: audit: type=1130 audit(1776901391.327:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:11.367078 kernel: audit: type=1131 audit(1776901391.328:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:11.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:11.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:11.318428 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 22 23:43:11.318679 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 22 23:43:11.330078 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Apr 22 23:43:11.536750 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (883) Apr 22 23:43:11.550779 kernel: BTRFS info (device vda6): first mount of filesystem 2d4e8828-6ba6-458d-87f8-40fb1ce4470a Apr 22 23:43:11.551021 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 22 23:43:11.581826 kernel: BTRFS info (device vda6): turning on async discard Apr 22 23:43:11.583636 kernel: BTRFS info (device vda6): enabling free space tree Apr 22 23:43:11.627820 kernel: BTRFS info (device vda6): last unmount of filesystem 2d4e8828-6ba6-458d-87f8-40fb1ce4470a Apr 22 23:43:11.636049 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 22 23:43:11.664915 kernel: audit: type=1130 audit(1776901391.643:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:11.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:11.643744 systemd-networkd[725]: eth0: Gained IPv6LL Apr 22 23:43:11.647418 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 22 23:43:12.050161 ignition[902]: Ignition 2.24.0 Apr 22 23:43:12.050545 ignition[902]: Stage: fetch-offline Apr 22 23:43:12.050673 ignition[902]: no configs at "/usr/lib/ignition/base.d" Apr 22 23:43:12.050686 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 23:43:12.052211 ignition[902]: parsed url from cmdline: "" Apr 22 23:43:12.052216 ignition[902]: no config URL provided Apr 22 23:43:12.054077 ignition[902]: reading system config file "/usr/lib/ignition/user.ign" Apr 22 23:43:12.054136 ignition[902]: no config at "/usr/lib/ignition/user.ign" Apr 22 23:43:12.055118 ignition[902]: op(1): [started] loading QEMU firmware config module Apr 22 23:43:12.055128 ignition[902]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 22 23:43:12.214213 ignition[902]: op(1): [finished] loading QEMU firmware config module Apr 22 23:43:12.650973 ignition[902]: parsing config with SHA512: 7ea38aca8db0df5eb640bd8657984d6ddd0189b0a69973da206f3d5a759511dca6ce5047e7cfa37378474f4ac95f5bdfccb7e51991179e62fc64a4337cf69a2d Apr 22 23:43:12.671494 unknown[902]: fetched base config from "system" Apr 22 23:43:12.671509 unknown[902]: fetched user config from "qemu" Apr 22 23:43:12.710003 ignition[902]: fetch-offline: fetch-offline passed Apr 22 23:43:12.710712 ignition[902]: Ignition finished successfully Apr 22 23:43:12.765440 kernel: audit: type=1130 audit(1776901392.738:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:12.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:12.718119 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 22 23:43:12.739894 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 22 23:43:12.748539 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 22 23:43:13.056552 ignition[913]: Ignition 2.24.0 Apr 22 23:43:13.056707 ignition[913]: Stage: kargs Apr 22 23:43:13.056890 ignition[913]: no configs at "/usr/lib/ignition/base.d" Apr 22 23:43:13.056898 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 23:43:13.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:13.079170 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 22 23:43:13.060468 ignition[913]: kargs: kargs passed Apr 22 23:43:13.060564 ignition[913]: Ignition finished successfully Apr 22 23:43:13.226122 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 22 23:43:14.089998 ignition[921]: Ignition 2.24.0 Apr 22 23:43:14.090088 ignition[921]: Stage: disks Apr 22 23:43:14.090499 ignition[921]: no configs at "/usr/lib/ignition/base.d" Apr 22 23:43:14.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:14.165992 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 22 23:43:14.090509 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 23:43:14.169152 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 22 23:43:14.112023 ignition[921]: disks: disks passed Apr 22 23:43:14.227182 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Apr 22 23:43:14.112525 ignition[921]: Ignition finished successfully Apr 22 23:43:14.275739 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 22 23:43:14.293249 systemd[1]: Reached target sysinit.target - System Initialization. Apr 22 23:43:14.369886 systemd[1]: Reached target basic.target - Basic System. Apr 22 23:43:14.589660 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 22 23:43:15.140213 systemd-fsck[931]: ROOT: clean, 15/456736 files, 38230/456704 blocks Apr 22 23:43:15.172185 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 22 23:43:15.275868 kernel: kauditd_printk_skb: 2 callbacks suppressed Apr 22 23:43:15.275903 kernel: audit: type=1130 audit(1776901395.172:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:15.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:15.275711 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 22 23:43:17.528902 kernel: EXT4-fs (vda9): mounted filesystem acb26ad1-a3c4-45b5-95a2-dde9b0966d3b r/w with ordered data mode. Quota mode: none. Apr 22 23:43:17.551712 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 22 23:43:17.676945 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 22 23:43:17.756778 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 22 23:43:17.790428 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 22 23:43:17.842031 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Apr 22 23:43:17.883125 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 22 23:43:17.883177 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 22 23:43:18.313463 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (941) Apr 22 23:43:18.365103 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 22 23:43:18.416100 kernel: BTRFS info (device vda6): first mount of filesystem 2d4e8828-6ba6-458d-87f8-40fb1ce4470a Apr 22 23:43:18.426138 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 22 23:43:18.540595 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 22 23:43:18.566583 kernel: BTRFS info (device vda6): turning on async discard Apr 22 23:43:18.566713 kernel: BTRFS info (device vda6): enabling free space tree Apr 22 23:43:18.776022 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 22 23:43:29.491210 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 22 23:43:29.641849 kernel: audit: type=1130 audit(1776901409.527:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:29.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:30.520182 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 22 23:43:30.677200 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 22 23:43:32.341962 kernel: BTRFS info (device vda6): last unmount of filesystem 2d4e8828-6ba6-458d-87f8-40fb1ce4470a Apr 22 23:43:32.408112 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Apr 22 23:43:33.632500 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 22 23:43:33.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:33.707928 kernel: audit: type=1130 audit(1776901413.673:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:34.584171 ignition[1039]: INFO : Ignition 2.24.0 Apr 22 23:43:34.584171 ignition[1039]: INFO : Stage: mount Apr 22 23:43:34.789975 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 22 23:43:34.789975 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 23:43:34.789975 ignition[1039]: INFO : mount: mount passed Apr 22 23:43:34.789975 ignition[1039]: INFO : Ignition finished successfully Apr 22 23:43:34.912423 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 22 23:43:34.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:34.971117 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 22 23:43:35.031179 kernel: audit: type=1130 audit(1776901414.952:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:43:36.652865 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 22 23:43:37.916601 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1053) Apr 22 23:43:37.982513 kernel: BTRFS info (device vda6): first mount of filesystem 2d4e8828-6ba6-458d-87f8-40fb1ce4470a Apr 22 23:43:37.984933 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 22 23:43:38.074840 kernel: BTRFS info (device vda6): turning on async discard Apr 22 23:43:38.075406 kernel: BTRFS info (device vda6): enabling free space tree Apr 22 23:43:38.086174 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 22 23:43:42.151012 ignition[1071]: INFO : Ignition 2.24.0 Apr 22 23:43:42.151012 ignition[1071]: INFO : Stage: files Apr 22 23:43:42.202830 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 22 23:43:42.202830 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 23:43:42.202830 ignition[1071]: DEBUG : files: compiled without relabeling support, skipping Apr 22 23:43:42.300043 ignition[1071]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 22 23:43:42.300043 ignition[1071]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 22 23:43:42.416121 ignition[1071]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 22 23:43:42.572831 ignition[1071]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 22 23:43:42.881124 unknown[1071]: wrote ssh authorized keys file for user: core Apr 22 23:43:42.984693 ignition[1071]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 22 23:43:43.074027 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 22 23:43:43.074027 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 
22 23:43:44.156641 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 22 23:43:52.144394 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 22 23:43:52.172620 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 22 23:43:52.172620 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 22 23:43:52.172620 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 22 23:43:52.172620 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 22 23:43:52.172620 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 22 23:43:52.172620 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 22 23:43:52.172620 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 22 23:43:52.172620 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 22 23:43:52.172620 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 22 23:43:52.172620 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 22 23:43:52.172620 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 22 23:43:52.172620 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 22 23:43:52.172620 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 22 23:43:53.017075 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 22 23:43:53.251663 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 22 23:44:22.555216 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 22 23:44:22.555216 ignition[1071]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 22 23:44:22.694777 ignition[1071]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 22 23:44:22.779855 ignition[1071]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 22 23:44:22.779855 ignition[1071]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 22 23:44:22.779855 ignition[1071]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 22 23:44:22.966177 ignition[1071]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 22 23:44:22.966177 ignition[1071]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 22 23:44:22.966177 
ignition[1071]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 22 23:44:22.966177 ignition[1071]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 22 23:44:24.865162 ignition[1071]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 22 23:44:25.144162 ignition[1071]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 22 23:44:25.175478 ignition[1071]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Apr 22 23:44:25.175478 ignition[1071]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 22 23:44:25.175478 ignition[1071]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 22 23:44:25.175478 ignition[1071]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 22 23:44:25.175478 ignition[1071]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 22 23:44:25.175478 ignition[1071]: INFO : files: files passed Apr 22 23:44:25.175478 ignition[1071]: INFO : Ignition finished successfully Apr 22 23:44:25.582582 kernel: audit: type=1130 audit(1776901465.218:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:25.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:25.176032 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 22 23:44:25.339838 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Apr 22 23:44:25.360852 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 22 23:44:25.675636 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 22 23:44:25.676008 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 22 23:44:25.864773 kernel: audit: type=1130 audit(1776901465.727:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:25.865154 kernel: audit: type=1131 audit(1776901465.787:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:25.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:25.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:26.276985 initrd-setup-root-after-ignition[1102]: grep: /sysroot/oem/oem-release: No such file or directory Apr 22 23:44:26.563204 initrd-setup-root-after-ignition[1104]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 22 23:44:26.563204 initrd-setup-root-after-ignition[1104]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 22 23:44:26.672557 initrd-setup-root-after-ignition[1108]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 22 23:44:26.771591 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Apr 22 23:44:26.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:26.841528 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 22 23:44:26.880685 kernel: audit: type=1130 audit(1776901466.826:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:27.017735 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 22 23:44:30.251753 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 22 23:44:30.252186 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 22 23:44:30.502067 kernel: audit: type=1130 audit(1776901470.325:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:30.502143 kernel: audit: type=1131 audit(1776901470.325:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:30.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:30.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:30.511078 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Apr 22 23:44:30.540530 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 22 23:44:30.560607 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 22 23:44:30.579176 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 22 23:44:36.625901 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 22 23:44:36.730796 kernel: audit: type=1130 audit(1776901476.646:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:36.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:36.988782 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 22 23:44:39.040031 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Apr 22 23:44:39.053850 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 22 23:44:39.078620 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 22 23:44:39.218006 systemd[1]: Stopped target timers.target - Timer Units. Apr 22 23:44:39.241133 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 22 23:44:39.251555 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 22 23:44:39.280538 kernel: audit: type=1131 audit(1776901479.261:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:44:39.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:39.282549 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 22 23:44:39.324780 systemd[1]: Stopped target basic.target - Basic System. Apr 22 23:44:39.366530 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 22 23:44:39.385831 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 22 23:44:39.412697 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 22 23:44:39.427857 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Apr 22 23:44:39.454860 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 22 23:44:39.474929 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 22 23:44:39.520079 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 22 23:44:39.578858 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 22 23:44:39.609015 systemd[1]: Stopped target swap.target - Swaps. Apr 22 23:44:39.652002 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 22 23:44:39.659384 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 22 23:44:39.715736 kernel: audit: type=1131 audit(1776901479.675:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:39.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:39.725555 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Apr 22 23:44:39.827735 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 22 23:44:39.871084 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 22 23:44:39.916181 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 22 23:44:39.960919 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 22 23:44:39.974068 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 22 23:44:40.036168 kernel: audit: type=1131 audit(1776901479.986:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:39.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:40.036117 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 22 23:44:40.039000 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 22 23:44:40.115735 kernel: audit: type=1131 audit(1776901480.079:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:40.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:40.081668 systemd[1]: Stopped target paths.target - Path Units. Apr 22 23:44:40.116828 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 22 23:44:40.146666 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Apr 22 23:44:40.176113 systemd[1]: Stopped target slices.target - Slice Units. Apr 22 23:44:40.212896 systemd[1]: Stopped target sockets.target - Socket Units. Apr 22 23:44:40.320830 systemd[1]: iscsid.socket: Deactivated successfully. Apr 22 23:44:40.350788 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 22 23:44:40.420747 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 22 23:44:40.518159 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 22 23:44:40.545528 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Apr 22 23:44:40.545767 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Apr 22 23:44:40.627128 kernel: audit: type=1131 audit(1776901480.598:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:40.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:40.567632 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 22 23:44:40.580564 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 22 23:44:40.779845 kernel: audit: type=1131 audit(1776901480.751:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:40.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:40.694737 systemd[1]: ignition-files.service: Deactivated successfully. 
Apr 22 23:44:40.720686 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 22 23:44:40.809629 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 22 23:44:40.831589 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 22 23:44:40.848549 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 22 23:44:40.901147 kernel: audit: type=1131 audit(1776901480.874:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:40.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:40.849647 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 22 23:44:40.961630 kernel: audit: type=1131 audit(1776901480.909:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:40.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:40.901712 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 22 23:44:40.909561 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 22 23:44:40.997910 kernel: audit: type=1131 audit(1776901480.975:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:44:40.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:40.909887 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 22 23:44:40.959335 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 22 23:44:41.126131 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 22 23:44:41.135679 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 22 23:44:41.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:41.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:41.286031 ignition[1128]: INFO : Ignition 2.24.0 Apr 22 23:44:41.286031 ignition[1128]: INFO : Stage: umount Apr 22 23:44:41.286031 ignition[1128]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 22 23:44:41.286031 ignition[1128]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 23:44:41.348823 ignition[1128]: INFO : umount: umount passed Apr 22 23:44:41.348823 ignition[1128]: INFO : Ignition finished successfully Apr 22 23:44:41.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:41.333070 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 22 23:44:41.349179 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 22 23:44:41.386818 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Apr 22 23:44:41.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:41.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:41.392682 systemd[1]: Stopped target network.target - Network. Apr 22 23:44:41.401877 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 22 23:44:41.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:41.402056 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 22 23:44:41.413214 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 22 23:44:41.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:41.413366 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 22 23:44:41.426929 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 22 23:44:41.467594 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 22 23:44:41.499692 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 22 23:44:41.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:41.521814 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 22 23:44:41.552039 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Apr 22 23:44:41.558858 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 22 23:44:41.730816 kernel: kauditd_printk_skb: 8 callbacks suppressed Apr 22 23:44:41.730845 kernel: audit: type=1131 audit(1776901481.708:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:41.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:41.573542 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 22 23:44:41.573623 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 22 23:44:41.772662 kernel: audit: type=1131 audit(1776901481.743:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:41.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:44:41.687812 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 22 23:44:41.688897 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 22 23:44:41.710606 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 22 23:44:41.710726 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 22 23:44:41.918619 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 22 23:44:41.934033 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Apr 22 23:44:41.974703 kernel: audit: type=1131 audit(1776901481.949:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:41.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:42.123000 audit: BPF prog-id=6 op=UNLOAD
Apr 22 23:44:42.125000 audit: BPF prog-id=9 op=UNLOAD
Apr 22 23:44:42.153535 kernel: audit: type=1334 audit(1776901482.123:67): prog-id=6 op=UNLOAD
Apr 22 23:44:42.153651 kernel: audit: type=1334 audit(1776901482.125:68): prog-id=9 op=UNLOAD
Apr 22 23:44:42.153916 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 22 23:44:42.167808 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 22 23:44:42.167920 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 22 23:44:42.221834 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 22 23:44:42.237944 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 22 23:44:42.238210 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 22 23:44:42.310146 kernel: audit: type=1131 audit(1776901482.267:69): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:42.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:42.268216 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 22 23:44:42.382581 kernel: audit: type=1131 audit(1776901482.310:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:42.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:42.310178 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 22 23:44:42.433951 kernel: audit: type=1131 audit(1776901482.396:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:42.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:42.310611 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 22 23:44:42.310676 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 22 23:44:42.396714 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 22 23:44:42.518417 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 22 23:44:42.581739 kernel: audit: type=1131 audit(1776901482.563:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:42.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:42.521170 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 22 23:44:42.581812 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 22 23:44:42.581882 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 22 23:44:42.653841 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 22 23:44:42.671744 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 22 23:44:42.819870 kernel: audit: type=1131 audit(1776901482.779:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:42.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:42.745764 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 22 23:44:42.745954 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 22 23:44:42.880176 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 22 23:44:42.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:42.882129 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 22 23:44:42.961911 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 22 23:44:43.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:42.962165 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 22 23:44:43.177916 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 22 23:44:43.219733 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 22 23:44:43.257891 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 22 23:44:43.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:43.311175 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 22 23:44:43.313874 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 22 23:44:43.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:43.375768 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 22 23:44:43.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:43.377105 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 22 23:44:43.412832 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 22 23:44:43.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:43.412903 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 22 23:44:43.477537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 22 23:44:43.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:43.485822 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 22 23:44:43.715730 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 22 23:44:43.717205 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 22 23:44:43.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:43.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:43.775834 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 22 23:44:43.827329 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 22 23:44:43.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:43.878092 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 22 23:44:44.088197 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 22 23:44:44.779545 systemd[1]: Switching root.
Apr 22 23:44:45.074067 systemd-journald[318]: Received SIGTERM from PID 1 (systemd).
Apr 22 23:44:45.076043 systemd-journald[318]: Journal stopped
Apr 22 23:44:58.687872 kernel: SELinux: policy capability network_peer_controls=1
Apr 22 23:44:58.688418 kernel: SELinux: policy capability open_perms=1
Apr 22 23:44:58.691534 kernel: SELinux: policy capability extended_socket_class=1
Apr 22 23:44:58.695604 kernel: SELinux: policy capability always_check_network=0
Apr 22 23:44:58.695647 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 22 23:44:58.695665 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 22 23:44:58.695681 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 22 23:44:58.695696 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 22 23:44:58.695715 kernel: SELinux: policy capability userspace_initial_context=0
Apr 22 23:44:58.695731 kernel: kauditd_printk_skb: 11 callbacks suppressed
Apr 22 23:44:58.696588 kernel: audit: type=1403 audit(1776901486.849:85): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 22 23:44:58.696663 systemd[1]: Successfully loaded SELinux policy in 1.435014s.
Apr 22 23:44:58.696685 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 56.559ms.
Apr 22 23:44:58.696698 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 22 23:44:58.696711 systemd[1]: Detected virtualization kvm.
Apr 22 23:44:58.696725 systemd[1]: Detected architecture x86-64.
Apr 22 23:44:58.696736 systemd[1]: Detected first boot.
Apr 22 23:44:58.696751 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Apr 22 23:44:58.696762 kernel: audit: type=1334 audit(1776901487.955:86): prog-id=10 op=LOAD
Apr 22 23:44:58.696772 kernel: audit: type=1334 audit(1776901487.955:87): prog-id=10 op=UNLOAD
Apr 22 23:44:58.696783 kernel: audit: type=1334 audit(1776901487.956:88): prog-id=11 op=LOAD
Apr 22 23:44:58.696795 kernel: audit: type=1334 audit(1776901487.956:89): prog-id=11 op=UNLOAD
Apr 22 23:44:58.696807 zram_generator::config[1173]: No configuration found.
Apr 22 23:44:58.696820 kernel: Guest personality initialized and is inactive
Apr 22 23:44:58.696831 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 22 23:44:58.696841 kernel: Initialized host personality
Apr 22 23:44:58.696851 kernel: NET: Registered PF_VSOCK protocol family
Apr 22 23:44:58.696860 systemd[1]: Populated /etc with preset unit settings.
Apr 22 23:44:58.696870 kernel: audit: type=1334 audit(1776901495.054:90): prog-id=12 op=LOAD
Apr 22 23:44:58.696880 kernel: audit: type=1334 audit(1776901495.055:91): prog-id=3 op=UNLOAD
Apr 22 23:44:58.696889 kernel: audit: type=1334 audit(1776901495.055:92): prog-id=13 op=LOAD
Apr 22 23:44:58.696901 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 22 23:44:58.696911 kernel: audit: type=1334 audit(1776901495.055:93): prog-id=14 op=LOAD
Apr 22 23:44:58.696921 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 22 23:44:58.696931 kernel: audit: type=1334 audit(1776901495.055:94): prog-id=4 op=UNLOAD
Apr 22 23:44:58.696940 kernel: audit: type=1334 audit(1776901495.055:95): prog-id=5 op=UNLOAD
Apr 22 23:44:58.696952 kernel: audit: type=1131 audit(1776901495.058:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:58.696963 kernel: audit: type=1130 audit(1776901495.117:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:58.696974 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 22 23:44:58.696985 kernel: audit: type=1131 audit(1776901495.121:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:58.696997 kernel: audit: type=1334 audit(1776901495.145:99): prog-id=12 op=UNLOAD
Apr 22 23:44:58.697012 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 22 23:44:58.699573 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 22 23:44:58.699657 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 22 23:44:58.699667 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 22 23:44:58.699681 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 22 23:44:58.699695 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 22 23:44:58.699711 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 22 23:44:58.699720 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 22 23:44:58.699735 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 22 23:44:58.699745 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 22 23:44:58.699754 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 22 23:44:58.699763 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 22 23:44:58.699773 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 22 23:44:58.699782 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 22 23:44:58.699791 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 22 23:44:58.699802 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 22 23:44:58.701620 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 22 23:44:58.701751 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 22 23:44:58.701764 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 22 23:44:58.701776 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 22 23:44:58.701793 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 22 23:44:58.701807 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 22 23:44:58.701819 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 22 23:44:58.701835 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Apr 22 23:44:58.701845 systemd[1]: Reached target slices.target - Slice Units.
Apr 22 23:44:58.701856 systemd[1]: Reached target swap.target - Swaps.
Apr 22 23:44:58.701866 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 22 23:44:58.701877 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 22 23:44:58.701890 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 22 23:44:58.701907 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Apr 22 23:44:58.701922 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Apr 22 23:44:58.701933 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 22 23:44:58.701945 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Apr 22 23:44:58.701961 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Apr 22 23:44:58.701978 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 22 23:44:58.701989 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 22 23:44:58.701999 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 22 23:44:58.702012 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 22 23:44:58.702932 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 22 23:44:58.703680 systemd[1]: Mounting media.mount - External Media Directory...
Apr 22 23:44:58.703700 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 22 23:44:58.703715 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 22 23:44:58.703732 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 22 23:44:58.703746 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 22 23:44:58.703764 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 22 23:44:58.703778 systemd[1]: Reached target machines.target - Containers.
Apr 22 23:44:58.703803 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 22 23:44:58.703824 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 22 23:44:58.703841 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 22 23:44:58.703858 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 22 23:44:58.703869 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 22 23:44:58.703881 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 22 23:44:58.703897 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 22 23:44:58.703913 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 22 23:44:58.703933 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 22 23:44:58.703950 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 22 23:44:58.703965 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 22 23:44:58.703979 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 22 23:44:58.703988 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 22 23:44:58.703998 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 22 23:44:58.704007 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 22 23:44:58.704019 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 22 23:44:58.704089 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 22 23:44:58.704099 kernel: ACPI: bus type drm_connector registered
Apr 22 23:44:58.704107 kernel: fuse: init (API version 7.41)
Apr 22 23:44:58.704117 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 22 23:44:58.704129 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 22 23:44:58.704138 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 22 23:44:58.704146 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 22 23:44:58.704188 systemd-journald[1247]: Collecting audit messages is enabled.
Apr 22 23:44:58.704215 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 22 23:44:58.704315 systemd-journald[1247]: Journal started
Apr 22 23:44:58.704345 systemd-journald[1247]: Runtime Journal (/run/log/journal/c2437a255cb74b9db56af5972a45fc94) is 6M, max 48.1M, 42.1M free.
Apr 22 23:44:56.604000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Apr 22 23:44:58.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:58.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:58.180000 audit: BPF prog-id=14 op=UNLOAD
Apr 22 23:44:58.185000 audit: BPF prog-id=13 op=UNLOAD
Apr 22 23:44:58.193000 audit: BPF prog-id=15 op=LOAD
Apr 22 23:44:58.219000 audit: BPF prog-id=16 op=LOAD
Apr 22 23:44:58.221000 audit: BPF prog-id=17 op=LOAD
Apr 22 23:44:58.659000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Apr 22 23:44:58.659000 audit[1247]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe57c56740 a2=4000 a3=0 items=0 ppid=1 pid=1247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 22 23:44:58.659000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Apr 22 23:44:55.040216 systemd[1]: Queued start job for default target multi-user.target.
Apr 22 23:44:55.055924 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 22 23:44:55.058452 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 22 23:44:55.058923 systemd[1]: systemd-journald.service: Consumed 5.739s CPU time.
Apr 22 23:44:58.740938 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 22 23:44:58.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:58.751975 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 22 23:44:58.761990 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 22 23:44:58.773964 systemd[1]: Mounted media.mount - External Media Directory.
Apr 22 23:44:58.806777 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 22 23:44:58.826019 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 22 23:44:58.894942 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 22 23:44:58.919548 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 22 23:44:58.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:58.950479 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 22 23:44:58.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:58.995990 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 22 23:44:59.008081 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 22 23:44:59.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:59.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:59.066944 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 22 23:44:59.068304 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 22 23:44:59.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:59.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:59.080996 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 22 23:44:59.096672 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 22 23:44:59.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:59.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:59.165816 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 22 23:44:59.169600 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 22 23:44:59.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:59.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:59.185597 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 22 23:44:59.190184 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 22 23:44:59.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:59.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:59.213724 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 22 23:44:59.214512 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 22 23:44:59.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:59.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:59.271751 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 22 23:44:59.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:59.282930 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 22 23:44:59.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:59.303773 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 22 23:44:59.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:59.317157 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 22 23:44:59.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:59.422819 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 22 23:44:59.511399 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Apr 22 23:44:59.555668 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 22 23:44:59.579883 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 22 23:44:59.587367 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 22 23:44:59.587492 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 22 23:44:59.612958 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 22 23:44:59.628711 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 22 23:44:59.631145 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Apr 22 23:44:59.636164 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 22 23:44:59.645833 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 22 23:44:59.655013 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 22 23:44:59.675806 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 22 23:44:59.683484 systemd-journald[1247]: Time spent on flushing to /var/log/journal/c2437a255cb74b9db56af5972a45fc94 is 117.776ms for 1153 entries.
Apr 22 23:44:59.683484 systemd-journald[1247]: System Journal (/var/log/journal/c2437a255cb74b9db56af5972a45fc94) is 8M, max 163.5M, 155.5M free.
Apr 22 23:44:59.825704 systemd-journald[1247]: Received client request to flush runtime journal.
Apr 22 23:44:59.684632 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 22 23:44:59.703439 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 22 23:44:59.715170 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 22 23:44:59.801971 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 22 23:44:59.831164 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 22 23:44:59.838877 kernel: loop1: detected capacity change from 0 to 50784
Apr 22 23:44:59.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:59.860135 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 22 23:44:59.871651 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 22 23:44:59.887805 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 22 23:44:59.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:44:59.918756 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 22 23:45:00.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:00.015093 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 22 23:45:00.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:00.046025 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Apr 22 23:45:00.046134 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Apr 22 23:45:00.074013 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 22 23:45:00.081189 kernel: loop2: detected capacity change from 0 to 111560
Apr 22 23:45:00.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:00.148718 kernel: kauditd_printk_skb: 34 callbacks suppressed
Apr 22 23:45:00.149517 kernel: audit: type=1130 audit(1776901500.116:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:00.149781 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 22 23:45:00.226508 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 22 23:45:00.286647 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 22 23:45:00.331527 kernel: loop3: detected capacity change from 0 to 228704
Apr 22 23:45:00.471746 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 22 23:45:00.476355 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 22 23:45:00.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:00.513791 kernel: audit: type=1130 audit(1776901500.487:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:00.664766 kernel: loop4: detected capacity change from 0 to 50784
Apr 22 23:45:00.887494 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 22 23:45:00.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:00.962942 kernel: audit: type=1130 audit(1776901500.909:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:01.019695 kernel: loop5: detected capacity change from 0 to 111560
Apr 22 23:45:01.027000 audit: BPF prog-id=18 op=LOAD
Apr 22 23:45:01.061724 kernel: audit: type=1334 audit(1776901501.027:135): prog-id=18 op=LOAD
Apr 22 23:45:01.061000 audit: BPF prog-id=19 op=LOAD
Apr 22 23:45:01.071028 kernel: audit: type=1334 audit(1776901501.061:136): prog-id=19 op=LOAD
Apr 22 23:45:01.073205 kernel: audit: type=1334 audit(1776901501.061:137): prog-id=20 op=LOAD
Apr 22 23:45:01.061000 audit: BPF prog-id=20 op=LOAD
Apr 22 23:45:01.077911 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Apr 22 23:45:01.177860 kernel: audit: type=1334 audit(1776901501.162:138): prog-id=21 op=LOAD
Apr 22 23:45:01.162000 audit: BPF prog-id=21 op=LOAD
Apr 22 23:45:01.207944 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 22 23:45:01.292616 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 22 23:45:01.371183 kernel: audit: type=1334 audit(1776901501.357:139): prog-id=22 op=LOAD
Apr 22 23:45:01.376816 kernel: loop6: detected capacity change from 0 to 228704
Apr 22 23:45:01.376966 kernel: audit: type=1334 audit(1776901501.358:140): prog-id=23 op=LOAD
Apr 22 23:45:01.357000 audit: BPF prog-id=22 op=LOAD
Apr 22 23:45:01.358000 audit: BPF prog-id=23 op=LOAD
Apr 22 23:45:01.358000 audit: BPF prog-id=24 op=LOAD
Apr 22 23:45:01.389530 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Apr 22 23:45:01.402981 kernel: audit: type=1334 audit(1776901501.358:141): prog-id=24 op=LOAD
Apr 22 23:45:01.439000 audit: BPF prog-id=25 op=LOAD
Apr 22 23:45:01.441000 audit: BPF prog-id=26 op=LOAD
Apr 22 23:45:01.442000 audit: BPF prog-id=27 op=LOAD
Apr 22 23:45:01.451526 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 22 23:45:01.571970 systemd-tmpfiles[1320]: ACLs are not supported, ignoring.
Apr 22 23:45:01.571994 systemd-tmpfiles[1320]: ACLs are not supported, ignoring.
Apr 22 23:45:01.608094 (sd-merge)[1316]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Apr 22 23:45:01.645134 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 22 23:45:01.665641 (sd-merge)[1316]: Merged extensions into '/usr'.
Apr 22 23:45:01.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:01.727160 systemd[1]: Reload requested from client PID 1294 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 22 23:45:01.748000 systemd[1]: Reloading...
Apr 22 23:45:01.887616 systemd-nsresourced[1321]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Apr 22 23:45:02.155195 zram_generator::config[1368]: No configuration found.
Apr 22 23:45:02.217308 systemd-oomd[1318]: No swap; memory pressure usage will be degraded
Apr 22 23:45:02.312398 systemd-resolved[1319]: Positive Trust Anchors:
Apr 22 23:45:02.312413 systemd-resolved[1319]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 22 23:45:02.312417 systemd-resolved[1319]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Apr 22 23:45:02.312443 systemd-resolved[1319]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 22 23:45:02.320653 systemd-resolved[1319]: Defaulting to hostname 'linux'.
Apr 22 23:45:04.144637 systemd[1]: Reloading finished in 2389 ms.
Apr 22 23:45:04.262561 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Apr 22 23:45:04.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:04.276893 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 22 23:45:04.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:04.321424 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Apr 22 23:45:04.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:04.363542 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 22 23:45:04.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:04.378860 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 22 23:45:04.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:04.421021 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 22 23:45:04.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:04.480328 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 22 23:45:04.523206 systemd[1]: Starting ensure-sysext.service...
Apr 22 23:45:04.537845 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 22 23:45:04.549000 audit: BPF prog-id=8 op=UNLOAD
Apr 22 23:45:04.549000 audit: BPF prog-id=7 op=UNLOAD
Apr 22 23:45:04.564000 audit: BPF prog-id=28 op=LOAD
Apr 22 23:45:04.564000 audit: BPF prog-id=29 op=LOAD
Apr 22 23:45:04.567584 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 22 23:45:04.613000 audit: BPF prog-id=30 op=LOAD
Apr 22 23:45:04.613000 audit: BPF prog-id=25 op=UNLOAD
Apr 22 23:45:04.613000 audit: BPF prog-id=31 op=LOAD
Apr 22 23:45:04.617000 audit: BPF prog-id=32 op=LOAD
Apr 22 23:45:04.617000 audit: BPF prog-id=26 op=UNLOAD
Apr 22 23:45:04.617000 audit: BPF prog-id=27 op=UNLOAD
Apr 22 23:45:04.624000 audit: BPF prog-id=33 op=LOAD
Apr 22 23:45:04.669000 audit: BPF prog-id=15 op=UNLOAD
Apr 22 23:45:04.669000 audit: BPF prog-id=34 op=LOAD
Apr 22 23:45:04.670000 audit: BPF prog-id=35 op=LOAD
Apr 22 23:45:04.670000 audit: BPF prog-id=16 op=UNLOAD
Apr 22 23:45:04.670000 audit: BPF prog-id=17 op=UNLOAD
Apr 22 23:45:04.670000 audit: BPF prog-id=36 op=LOAD
Apr 22 23:45:04.670000 audit: BPF prog-id=22 op=UNLOAD
Apr 22 23:45:04.671000 audit: BPF prog-id=37 op=LOAD
Apr 22 23:45:04.671000 audit: BPF prog-id=38 op=LOAD
Apr 22 23:45:04.671000 audit: BPF prog-id=23 op=UNLOAD
Apr 22 23:45:04.671000 audit: BPF prog-id=24 op=UNLOAD
Apr 22 23:45:04.673000 audit: BPF prog-id=39 op=LOAD
Apr 22 23:45:04.673000 audit: BPF prog-id=18 op=UNLOAD
Apr 22 23:45:04.673000 audit: BPF prog-id=40 op=LOAD
Apr 22 23:45:04.673000 audit: BPF prog-id=41 op=LOAD
Apr 22 23:45:04.673000 audit: BPF prog-id=19 op=UNLOAD
Apr 22 23:45:04.673000 audit: BPF prog-id=20 op=UNLOAD
Apr 22 23:45:04.673916 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 22 23:45:04.673946 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 22 23:45:04.674373 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 22 23:45:04.674000 audit: BPF prog-id=42 op=LOAD
Apr 22 23:45:04.674000 audit: BPF prog-id=21 op=UNLOAD
Apr 22 23:45:04.675432 systemd-tmpfiles[1404]: ACLs are not supported, ignoring.
Apr 22 23:45:04.675532 systemd-tmpfiles[1404]: ACLs are not supported, ignoring.
Apr 22 23:45:04.682514 systemd-tmpfiles[1404]: Detected autofs mount point /boot during canonicalization of boot.
Apr 22 23:45:04.682521 systemd-tmpfiles[1404]: Skipping /boot
Apr 22 23:45:04.685914 systemd[1]: Reload requested from client PID 1403 ('systemctl') (unit ensure-sysext.service)...
Apr 22 23:45:04.685995 systemd[1]: Reloading...
Apr 22 23:45:04.708959 systemd-tmpfiles[1404]: Detected autofs mount point /boot during canonicalization of boot.
Apr 22 23:45:04.709028 systemd-tmpfiles[1404]: Skipping /boot
Apr 22 23:45:04.729149 systemd-udevd[1405]: Using default interface naming scheme 'v257'.
Apr 22 23:45:04.944314 zram_generator::config[1437]: No configuration found.
Apr 22 23:45:05.388027 kernel: mousedev: PS/2 mouse device common for all mice
Apr 22 23:45:05.696839 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 22 23:45:05.715760 kernel: ACPI: button: Power Button [PWRF]
Apr 22 23:45:05.986551 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 22 23:45:05.987039 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 22 23:45:06.836699 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 22 23:45:06.843625 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 22 23:45:06.843712 systemd[1]: Reloading finished in 2157 ms.
Apr 22 23:45:06.856783 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 22 23:45:06.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:06.894903 kernel: kauditd_printk_skb: 40 callbacks suppressed
Apr 22 23:45:06.909975 kernel: audit: type=1130 audit(1776901506.870:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:06.917998 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 22 23:45:06.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:07.009895 kernel: audit: type=1130 audit(1776901506.990:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:07.081140 kernel: audit: type=1334 audit(1776901507.055:184): prog-id=43 op=LOAD
Apr 22 23:45:07.118952 kernel: audit: type=1334 audit(1776901507.064:185): prog-id=44 op=LOAD
Apr 22 23:45:07.119024 kernel: audit: type=1334 audit(1776901507.064:186): prog-id=28 op=UNLOAD
Apr 22 23:45:07.055000 audit: BPF prog-id=43 op=LOAD
Apr 22 23:45:07.200026 kernel: audit: type=1334 audit(1776901507.064:187): prog-id=29 op=UNLOAD
Apr 22 23:45:07.200135 kernel: audit: type=1334 audit(1776901507.066:188): prog-id=45 op=LOAD
Apr 22 23:45:07.200149 kernel: audit: type=1334 audit(1776901507.071:189): prog-id=36 op=UNLOAD
Apr 22 23:45:07.064000 audit: BPF prog-id=44 op=LOAD
Apr 22 23:45:07.254459 kernel: audit: type=1334 audit(1776901507.071:190): prog-id=46 op=LOAD
Apr 22 23:45:07.254610 kernel: audit: type=1334 audit(1776901507.071:191): prog-id=47 op=LOAD
Apr 22 23:45:07.064000 audit: BPF prog-id=28 op=UNLOAD
Apr 22 23:45:07.064000 audit: BPF prog-id=29 op=UNLOAD
Apr 22 23:45:07.066000 audit: BPF prog-id=45 op=LOAD
Apr 22 23:45:07.071000 audit: BPF prog-id=36 op=UNLOAD
Apr 22 23:45:07.071000 audit: BPF prog-id=46 op=LOAD
Apr 22 23:45:07.071000 audit: BPF prog-id=47 op=LOAD
Apr 22 23:45:07.071000 audit: BPF prog-id=37 op=UNLOAD
Apr 22 23:45:07.071000 audit: BPF prog-id=38 op=UNLOAD
Apr 22 23:45:07.144000 audit: BPF prog-id=48 op=LOAD
Apr 22 23:45:07.206000 audit: BPF prog-id=39 op=UNLOAD
Apr 22 23:45:07.206000 audit: BPF prog-id=49 op=LOAD
Apr 22 23:45:07.206000 audit: BPF prog-id=50 op=LOAD
Apr 22 23:45:07.206000 audit: BPF prog-id=40 op=UNLOAD
Apr 22 23:45:07.206000 audit: BPF prog-id=41 op=UNLOAD
Apr 22 23:45:07.226000 audit: BPF prog-id=51 op=LOAD
Apr 22 23:45:07.226000 audit: BPF prog-id=33 op=UNLOAD
Apr 22 23:45:07.269000 audit: BPF prog-id=52 op=LOAD
Apr 22 23:45:07.270000 audit: BPF prog-id=53 op=LOAD
Apr 22 23:45:07.270000 audit: BPF prog-id=34 op=UNLOAD
Apr 22 23:45:07.270000 audit: BPF prog-id=35 op=UNLOAD
Apr 22 23:45:07.397000 audit: BPF prog-id=54 op=LOAD
Apr 22 23:45:07.400000 audit: BPF prog-id=42 op=UNLOAD
Apr 22 23:45:07.415000 audit: BPF prog-id=55 op=LOAD
Apr 22 23:45:07.423000 audit: BPF prog-id=30 op=UNLOAD
Apr 22 23:45:07.427000 audit: BPF prog-id=56 op=LOAD
Apr 22 23:45:07.431000 audit: BPF prog-id=57 op=LOAD
Apr 22 23:45:07.431000 audit: BPF prog-id=31 op=UNLOAD
Apr 22 23:45:07.431000 audit: BPF prog-id=32 op=UNLOAD
Apr 22 23:45:09.672187 systemd[1]: Finished ensure-sysext.service.
Apr 22 23:45:09.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:10.817826 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 22 23:45:10.847505 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 22 23:45:10.863020 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 22 23:45:10.879895 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 22 23:45:10.947987 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 22 23:45:10.976497 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 22 23:45:11.011910 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 22 23:45:11.120131 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 22 23:45:11.137915 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 22 23:45:11.138204 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Apr 22 23:45:11.152355 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 22 23:45:11.175639 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 22 23:45:11.193208 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 22 23:45:11.224447 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 22 23:45:11.241000 audit: BPF prog-id=58 op=LOAD
Apr 22 23:45:11.243719 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 22 23:45:11.245000 audit: BPF prog-id=59 op=LOAD
Apr 22 23:45:11.248444 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 22 23:45:11.269767 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 22 23:45:11.431974 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 22 23:45:11.456559 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 22 23:45:11.468000 audit[1545]: SYSTEM_BOOT pid=1545 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:11.462053 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 22 23:45:11.463597 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 22 23:45:11.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:11.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:11.515570 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 22 23:45:11.522689 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 22 23:45:11.523652 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 22 23:45:11.523931 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 22 23:45:11.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:11.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:11.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:11.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:11.551875 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 22 23:45:11.554839 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 22 23:45:11.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:11.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:11.565450 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 22 23:45:11.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:11.580917 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 22 23:45:11.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 23:45:11.877042 augenrules[1553]: No rules
Apr 22 23:45:11.872000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Apr 22 23:45:11.878757 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 22 23:45:11.883319 kernel: kauditd_printk_skb: 36 callbacks suppressed
Apr 22 23:45:11.885850 kernel: audit: type=1305 audit(1776901511.872:228): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Apr 22 23:45:11.872000 audit[1553]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe6021e480 a2=420 a3=0 items=0 ppid=1518 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 22 23:45:11.958330 kernel: audit: type=1300 audit(1776901511.872:228): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe6021e480 a2=420 a3=0 items=0 ppid=1518 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 22 23:45:11.872000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 22 23:45:11.972179 kernel: audit: type=1327 audit(1776901511.872:228): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 22 23:45:12.002903 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 22 23:45:12.022560 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 22 23:45:12.089362 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 22 23:45:12.154323 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 22 23:45:12.154610 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 22 23:45:12.154633 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 22 23:45:13.229902 systemd-networkd[1542]: lo: Link UP
Apr 22 23:45:13.229913 systemd-networkd[1542]: lo: Gained carrier
Apr 22 23:45:13.238032 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 22 23:45:13.239061 systemd-networkd[1542]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Apr 22 23:45:13.239065 systemd-networkd[1542]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 22 23:45:13.243917 systemd-networkd[1542]: eth0: Link UP
Apr 22 23:45:13.245198 systemd-networkd[1542]: eth0: Gained carrier
Apr 22 23:45:13.245701 systemd-networkd[1542]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Apr 22 23:45:13.272601 systemd-networkd[1542]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 22 23:45:13.280666 systemd-timesyncd[1543]: Network configuration changed, trying to establish connection.
Apr 22 23:45:13.877107 systemd-resolved[1319]: Clock change detected. Flushing caches.
Apr 22 23:45:13.880004 systemd-timesyncd[1543]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 22 23:45:13.880053 systemd-timesyncd[1543]: Initial clock synchronization to Wed 2026-04-22 23:45:13.875070 UTC.
Apr 22 23:45:14.499025 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 22 23:45:14.673578 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 22 23:45:14.759321 systemd[1]: Reached target network.target - Network.
Apr 22 23:45:14.770087 systemd[1]: Reached target time-set.target - System Time Set.
Apr 22 23:45:14.848662 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 22 23:45:15.064143 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 22 23:45:15.561007 systemd-networkd[1542]: eth0: Gained IPv6LL
Apr 22 23:45:15.601509 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 22 23:45:15.730198 systemd[1]: Reached target network-online.target - Network is Online.
Apr 22 23:45:15.855698 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 22 23:45:20.942555 ldconfig[1533]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 22 23:45:21.146785 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 22 23:45:21.176021 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 22 23:45:22.066884 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 22 23:45:22.169049 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 22 23:45:22.195828 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 22 23:45:22.211769 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 22 23:45:22.257263 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Apr 22 23:45:22.289182 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 22 23:45:22.368553 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 22 23:45:22.394056 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Apr 22 23:45:22.412732 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Apr 22 23:45:22.435744 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 22 23:45:22.498056 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 22 23:45:22.548268 systemd[1]: Reached target paths.target - Path Units.
Apr 22 23:45:22.593704 systemd[1]: Reached target timers.target - Timer Units.
Apr 22 23:45:22.895041 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 22 23:45:23.910099 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 22 23:45:24.488862 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 22 23:45:24.608891 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 22 23:45:24.659623 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 22 23:45:24.738689 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 22 23:45:24.860096 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 22 23:45:24.920100 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 22 23:45:24.956484 systemd[1]: Reached target sockets.target - Socket Units.
Apr 22 23:45:24.966960 systemd[1]: Reached target basic.target - Basic System.
Apr 22 23:45:25.000930 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 22 23:45:25.001141 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 22 23:45:25.099240 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 22 23:45:25.136260 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 22 23:45:25.156214 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 22 23:45:25.195394 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 22 23:45:25.308155 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 22 23:45:25.370387 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 22 23:45:25.383903 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 22 23:45:25.400777 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Apr 22 23:45:25.411820 jq[1590]: false
Apr 22 23:45:25.418687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:45:25.441052 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 22 23:45:25.462798 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 22 23:45:25.464729 oslogin_cache_refresh[1592]: Refreshing passwd entry cache
Apr 22 23:45:25.465964 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Refreshing passwd entry cache
Apr 22 23:45:25.506814 extend-filesystems[1591]: Found /dev/vda6
Apr 22 23:45:25.557196 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 22 23:45:25.595690 oslogin_cache_refresh[1592]: Failure getting users, quitting
Apr 22 23:45:25.606586 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Failure getting users, quitting
Apr 22 23:45:25.606586 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 22 23:45:25.606586 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Refreshing group entry cache
Apr 22 23:45:25.595780 oslogin_cache_refresh[1592]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 22 23:45:25.606706 extend-filesystems[1591]: Found /dev/vda9
Apr 22 23:45:25.698807 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Failure getting groups, quitting
Apr 22 23:45:25.698807 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 22 23:45:25.595905 oslogin_cache_refresh[1592]: Refreshing group entry cache
Apr 22 23:45:25.700572 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 22 23:45:25.606644 oslogin_cache_refresh[1592]: Failure getting groups, quitting
Apr 22 23:45:25.606656 oslogin_cache_refresh[1592]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 22 23:45:25.738024 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 22 23:45:25.937369 extend-filesystems[1591]: Checking size of /dev/vda9
Apr 22 23:45:26.117587 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 22 23:45:26.156932 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 22 23:45:26.303867 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 22 23:45:26.348118 systemd[1]: Starting update-engine.service - Update Engine...
Apr 22 23:45:26.489282 extend-filesystems[1591]: Resized partition /dev/vda9
Apr 22 23:45:26.493366 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 22 23:45:26.746584 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 22 23:45:26.754528 extend-filesystems[1625]: resize2fs 1.47.3 (8-Jul-2025)
Apr 22 23:45:27.007169 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Apr 22 23:45:26.990052 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 22 23:45:27.102207 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 22 23:45:27.115988 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Apr 22 23:45:27.121737 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Apr 22 23:45:27.143107 systemd[1]: motdgen.service: Deactivated successfully.
Apr 22 23:45:27.157870 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 22 23:45:27.195709 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 22 23:45:27.349546 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 22 23:45:27.358061 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 22 23:45:28.070224 jq[1624]: true
Apr 22 23:45:28.423053 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 22 23:45:28.438054 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 22 23:45:28.459777 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 22 23:45:28.606194 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Apr 22 23:45:29.066743 extend-filesystems[1625]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 22 23:45:29.066743 extend-filesystems[1625]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 22 23:45:29.066743 extend-filesystems[1625]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Apr 22 23:45:29.340150 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 22 23:45:29.862800 tar[1634]: linux-amd64/LICENSE
Apr 22 23:45:29.862800 tar[1634]: linux-amd64/helm
Apr 22 23:45:29.863237 sshd_keygen[1619]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 22 23:45:29.863380 update_engine[1620]: I20260422 23:45:29.358240 1620 main.cc:92] Flatcar Update Engine starting
Apr 22 23:45:29.863808 extend-filesystems[1591]: Resized filesystem in /dev/vda9
Apr 22 23:45:29.355880 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 22 23:45:30.016081 jq[1648]: true
Apr 22 23:45:30.733781 systemd-logind[1615]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 22 23:45:30.733796 systemd-logind[1615]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 22 23:45:30.754651 systemd-logind[1615]: New seat seat0.
Apr 22 23:45:30.904106 dbus-daemon[1588]: [system] SELinux support is enabled
Apr 22 23:45:30.958691 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 22 23:45:31.138847 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 22 23:45:31.152766 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 22 23:45:31.153154 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 22 23:45:31.163734 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 22 23:45:31.163856 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 22 23:45:31.563165 dbus-daemon[1588]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 22 23:45:31.656636 systemd[1]: Started update-engine.service - Update Engine.
Apr 22 23:45:31.756194 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 22 23:45:32.212514 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 22 23:45:32.268192 update_engine[1620]: I20260422 23:45:32.168151 1620 update_check_scheduler.cc:74] Next update check in 11m51s
Apr 22 23:45:32.417848 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 22 23:45:33.445792 bash[1693]: Updated "/home/core/.ssh/authorized_keys"
Apr 22 23:45:33.557969 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 22 23:45:33.593932 systemd[1]: issuegen.service: Deactivated successfully.
Apr 22 23:45:33.642220 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 22 23:45:33.689132 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 22 23:45:33.820206 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 22 23:45:36.462189 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 22 23:45:37.357830 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:49702.service - OpenSSH per-connection server daemon (10.0.0.1:49702).
Apr 22 23:45:37.593014 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 22 23:45:37.863959 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 22 23:45:38.063162 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 22 23:45:38.108231 systemd[1]: Reached target getty.target - Login Prompts.
Apr 22 23:45:38.770018 locksmithd[1685]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 22 23:45:39.893690 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 49702 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 22 23:45:39.909120 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 22 23:45:40.194572 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 22 23:45:40.219180 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 22 23:45:40.242594 systemd-logind[1615]: New session 1 of user core.
Apr 22 23:45:40.294098 containerd[1642]: time="2026-04-22T23:45:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 22 23:45:40.436911 containerd[1642]: time="2026-04-22T23:45:40.436275002Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Apr 22 23:45:40.512988 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 22 23:45:40.523238 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 22 23:45:40.603997 containerd[1642]: time="2026-04-22T23:45:40.603044647Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t=3.285803ms
Apr 22 23:45:40.603997 containerd[1642]: time="2026-04-22T23:45:40.603315050Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 22 23:45:40.603997 containerd[1642]: time="2026-04-22T23:45:40.603744371Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 22 23:45:40.603997 containerd[1642]: time="2026-04-22T23:45:40.603760990Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 22 23:45:40.604334 containerd[1642]: time="2026-04-22T23:45:40.604200274Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 22 23:45:40.604334 containerd[1642]: time="2026-04-22T23:45:40.604218017Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 22 23:45:40.604494 containerd[1642]: time="2026-04-22T23:45:40.604345431Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 22 23:45:40.604533 containerd[1642]: time="2026-04-22T23:45:40.604508309Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 22 23:45:40.606679 containerd[1642]: time="2026-04-22T23:45:40.606319128Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 22 23:45:40.606679 containerd[1642]: time="2026-04-22T23:45:40.606627307Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 22 23:45:40.606679 containerd[1642]: time="2026-04-22T23:45:40.606705659Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 22 23:45:40.606679 containerd[1642]: time="2026-04-22T23:45:40.606713992Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Apr 22 23:45:40.607154 containerd[1642]: time="2026-04-22T23:45:40.607132464Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Apr 22 23:45:40.607212 containerd[1642]: time="2026-04-22T23:45:40.607204686Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 22 23:45:40.607660 containerd[1642]: time="2026-04-22T23:45:40.607640604Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 22 23:45:40.662642 containerd[1642]: time="2026-04-22T23:45:40.662297344Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 22 23:45:40.662870 containerd[1642]: time="2026-04-22T23:45:40.662854297Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 22 23:45:40.662911 containerd[1642]: time="2026-04-22T23:45:40.662903739Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 22 23:45:40.666602 containerd[1642]: time="2026-04-22T23:45:40.666197512Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 22 23:45:40.669498 containerd[1642]: time="2026-04-22T23:45:40.669154309Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 22 23:45:40.669498 containerd[1642]: time="2026-04-22T23:45:40.669305840Z" level=info msg="metadata content store policy set" policy=shared
Apr 22 23:45:40.723767 containerd[1642]: time="2026-04-22T23:45:40.723047821Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 22 23:45:40.733649 (systemd)[1726]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Apr 22 23:45:40.734237 containerd[1642]: time="2026-04-22T23:45:40.733901932Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Apr 22 23:45:40.734237 containerd[1642]: time="2026-04-22T23:45:40.734115894Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Apr 22 23:45:40.734237 containerd[1642]: time="2026-04-22T23:45:40.734137600Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 22 23:45:40.734237 containerd[1642]: time="2026-04-22T23:45:40.734200572Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 22 23:45:40.734237 containerd[1642]: time="2026-04-22T23:45:40.734223063Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 22 23:45:40.734618 containerd[1642]: time="2026-04-22T23:45:40.734276203Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 22 23:45:40.734618 containerd[1642]: time="2026-04-22T23:45:40.734292013Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 22 23:45:40.734618 containerd[1642]: time="2026-04-22T23:45:40.734313466Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 22 23:45:40.734618 containerd[1642]: time="2026-04-22T23:45:40.734493395Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 22 23:45:40.734618 containerd[1642]: time="2026-04-22T23:45:40.734520410Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 22 23:45:40.734618 containerd[1642]: time="2026-04-22T23:45:40.734539365Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 22 23:45:40.734618 containerd[1642]: time="2026-04-22T23:45:40.734553342Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 22 23:45:40.734618 containerd[1642]: time="2026-04-22T23:45:40.734612479Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 22 23:45:40.735594 containerd[1642]: time="2026-04-22T23:45:40.735570081Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 22 23:45:40.749925 containerd[1642]: time="2026-04-22T23:45:40.742527882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 22 23:45:40.749925 containerd[1642]: time="2026-04-22T23:45:40.742845268Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 22 23:45:40.749925 containerd[1642]: time="2026-04-22T23:45:40.742902623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 22 23:45:40.749925 containerd[1642]: time="2026-04-22T23:45:40.742915376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 22 23:45:40.749925 containerd[1642]: time="2026-04-22T23:45:40.742924682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 22 23:45:40.749925 containerd[1642]: time="2026-04-22T23:45:40.742953889Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 22 23:45:40.749925 containerd[1642]: time="2026-04-22T23:45:40.743001601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 22 23:45:40.749925 containerd[1642]: time="2026-04-22T23:45:40.743051072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 22 23:45:40.749925 containerd[1642]: time="2026-04-22T23:45:40.743060981Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 22 23:45:40.749925 containerd[1642]: time="2026-04-22T23:45:40.743090726Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 22 23:45:40.749925 containerd[1642]: time="2026-04-22T23:45:40.743118241Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 22 23:45:40.749925 containerd[1642]: time="2026-04-22T23:45:40.748877203Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 22 23:45:40.749925 containerd[1642]: time="2026-04-22T23:45:40.748960366Z" level=info msg="Start snapshots syncer"
Apr 22 23:45:40.753263 containerd[1642]: time="2026-04-22T23:45:40.750191262Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 22 23:45:40.757097 containerd[1642]: time="2026-04-22T23:45:40.755277553Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 22 23:45:40.757097 containerd[1642]: time="2026-04-22T23:45:40.756682850Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Apr 22 23:45:40.757815 containerd[1642]: time="2026-04-22T23:45:40.756981169Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Apr 22 23:45:40.757815 containerd[1642]: time="2026-04-22T23:45:40.757352100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Apr 22 23:45:40.757815 containerd[1642]: time="2026-04-22T23:45:40.757562248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Apr 22 23:45:40.757815 containerd[1642]: time="2026-04-22T23:45:40.757575634Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Apr 22 23:45:40.757815 containerd[1642]: time="2026-04-22T23:45:40.757583921Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Apr 22 23:45:40.757815 containerd[1642]: time="2026-04-22T23:45:40.757626516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Apr 22 23:45:40.757815 containerd[1642]: time="2026-04-22T23:45:40.757634626Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Apr 22 23:45:40.757815 containerd[1642]: time="2026-04-22T23:45:40.757677486Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Apr 22 23:45:40.757815 containerd[1642]: time="2026-04-22T23:45:40.757686433Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Apr 22 23:45:40.757815 containerd[1642]: time="2026-04-22T23:45:40.757694240Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Apr 22 23:45:40.760254 containerd[1642]: time="2026-04-22T23:45:40.759534286Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 22 23:45:40.760254 containerd[1642]: time="2026-04-22T23:45:40.759724570Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 22 23:45:40.760254 containerd[1642]: time="2026-04-22T23:45:40.759732975Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 22 23:45:40.760254 containerd[1642]: time="2026-04-22T23:45:40.759895704Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 22 23:45:40.760254 containerd[1642]: time="2026-04-22T23:45:40.759902619Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Apr 22 23:45:40.760254 containerd[1642]: time="2026-04-22T23:45:40.760023158Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Apr 22 23:45:40.760254 containerd[1642]: time="2026-04-22T23:45:40.760099460Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Apr 22 23:45:40.760254 containerd[1642]: time="2026-04-22T23:45:40.760198620Z" level=info msg="runtime interface created"
Apr 22 23:45:40.760254 containerd[1642]: time="2026-04-22T23:45:40.760206172Z" level=info msg="created NRI interface"
Apr 22 23:45:40.760254 containerd[1642]: time="2026-04-22T23:45:40.760258552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Apr 22 23:45:40.760212 systemd-logind[1615]: New session 2 of user core.
Apr 22 23:45:40.760652 containerd[1642]: time="2026-04-22T23:45:40.760283169Z" level=info msg="Connect containerd service"
Apr 22 23:45:40.760652 containerd[1642]: time="2026-04-22T23:45:40.760495823Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 22 23:45:40.767538 containerd[1642]: time="2026-04-22T23:45:40.766280845Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 22 23:45:42.390720 systemd[1726]: Queued start job for default target default.target.
Apr 22 23:45:42.415156 systemd[1726]: Created slice app.slice - User Application Slice.
Apr 22 23:45:42.415356 systemd[1726]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Apr 22 23:45:42.415489 systemd[1726]: Reached target paths.target - Paths.
Apr 22 23:45:42.415559 systemd[1726]: Reached target timers.target - Timers.
Apr 22 23:45:42.434168 systemd[1726]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 22 23:45:42.450545 systemd[1726]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Apr 22 23:45:43.760902 systemd[1726]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 22 23:45:43.761670 systemd[1726]: Reached target sockets.target - Sockets.
Apr 22 23:45:43.937643 systemd[1726]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Apr 22 23:45:43.937869 systemd[1726]: Reached target basic.target - Basic System.
Apr 22 23:45:43.937919 systemd[1726]: Reached target default.target - Main User Target.
Apr 22 23:45:43.937941 systemd[1726]: Startup finished in 3.027s.
Apr 22 23:45:43.938920 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 22 23:45:44.010795 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 22 23:45:45.117088 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:56514.service - OpenSSH per-connection server daemon (10.0.0.1:56514).
Apr 22 23:45:46.199287 containerd[1642]: time="2026-04-22T23:45:46.197163814Z" level=info msg="Start subscribing containerd event"
Apr 22 23:45:46.346364 containerd[1642]: time="2026-04-22T23:45:46.303743057Z" level=info msg="Start recovering state"
Apr 22 23:45:46.533315 containerd[1642]: time="2026-04-22T23:45:46.525297947Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 22 23:45:46.558955 containerd[1642]: time="2026-04-22T23:45:46.557790888Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 22 23:45:46.659218 containerd[1642]: time="2026-04-22T23:45:46.607253891Z" level=info msg="Start event monitor"
Apr 22 23:45:46.688994 containerd[1642]: time="2026-04-22T23:45:46.647371106Z" level=info msg="Start cni network conf syncer for default"
Apr 22 23:45:46.699200 containerd[1642]: time="2026-04-22T23:45:46.694938800Z" level=info msg="Start streaming server"
Apr 22 23:45:46.727817 containerd[1642]: time="2026-04-22T23:45:46.716236949Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Apr 22 23:45:46.868366 containerd[1642]: time="2026-04-22T23:45:46.732341312Z" level=info msg="runtime interface starting up..."
Apr 22 23:45:46.943047 containerd[1642]: time="2026-04-22T23:45:46.871975972Z" level=info msg="starting plugins..."
Apr 22 23:45:47.498849 containerd[1642]: time="2026-04-22T23:45:47.497337751Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Apr 22 23:45:47.691086 containerd[1642]: time="2026-04-22T23:45:47.603789425Z" level=info msg="containerd successfully booted in 7.332490s"
Apr 22 23:45:47.693272 systemd[1]: Started containerd.service - containerd container runtime.
Apr 22 23:45:48.546767 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 56514 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 22 23:45:48.605210 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 22 23:45:49.173391 tar[1634]: linux-amd64/README.md
Apr 22 23:45:49.288997 systemd-logind[1615]: New session 3 of user core.
Apr 22 23:45:49.401188 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 22 23:45:49.454117 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 22 23:45:50.123865 sshd[1766]: Connection closed by 10.0.0.1 port 56514
Apr 22 23:45:50.139117 sshd-session[1756]: pam_unix(sshd:session): session closed for user core
Apr 22 23:45:50.524927 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:56514.service: Deactivated successfully.
Apr 22 23:45:50.539735 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:56514.service: Consumed 1.023s CPU time, 4.1M memory peak.
Apr 22 23:45:50.609694 systemd[1]: session-3.scope: Deactivated successfully.
Apr 22 23:45:50.672950 systemd-logind[1615]: Session 3 logged out. Waiting for processes to exit.
Apr 22 23:45:50.754142 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:46136.service - OpenSSH per-connection server daemon (10.0.0.1:46136).
Apr 22 23:45:51.045269 systemd-logind[1615]: Removed session 3.
Apr 22 23:45:53.165138 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 46136 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 22 23:45:53.405182 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 22 23:45:55.392069 systemd-logind[1615]: New session 4 of user core.
Apr 22 23:45:55.570855 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 22 23:45:57.668634 sshd[1776]: Connection closed by 10.0.0.1 port 46136
Apr 22 23:45:57.799895 sshd-session[1772]: pam_unix(sshd:session): session closed for user core
Apr 22 23:45:58.459014 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:46136.service: Deactivated successfully.
Apr 22 23:45:58.533631 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:46136.service: Consumed 1.044s CPU time, 4.4M memory peak.
Apr 22 23:45:59.104063 systemd[1]: session-4.scope: Deactivated successfully.
Apr 22 23:45:59.200832 systemd[1]: session-4.scope: Consumed 1.266s CPU time, 2.5M memory peak.
Apr 22 23:45:59.450045 systemd-logind[1615]: Session 4 logged out. Waiting for processes to exit.
Apr 22 23:45:59.807346 systemd-logind[1615]: Removed session 4.
Apr 22 23:46:04.539706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:46:04.567154 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 22 23:46:04.569620 systemd[1]: Startup finished in 12.084s (kernel) + 1min 44.169s (initrd) + 1min 18.549s (userspace) = 3min 14.802s.
Apr 22 23:46:04.611853 (kubelet)[1786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:46:07.724965 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:47706.service - OpenSSH per-connection server daemon (10.0.0.1:47706).
Apr 22 23:46:08.023933 sshd[1793]: Accepted publickey for core from 10.0.0.1 port 47706 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 22 23:46:08.033830 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 22 23:46:08.052790 systemd-logind[1615]: New session 5 of user core.
Apr 22 23:46:08.074029 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 22 23:46:08.224018 sshd[1797]: Connection closed by 10.0.0.1 port 47706
Apr 22 23:46:08.232082 sshd-session[1793]: pam_unix(sshd:session): session closed for user core
Apr 22 23:46:08.259512 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:47706.service: Deactivated successfully.
Apr 22 23:46:08.261361 systemd[1]: session-5.scope: Deactivated successfully.
Apr 22 23:46:08.263121 systemd-logind[1615]: Session 5 logged out. Waiting for processes to exit.
Apr 22 23:46:08.266782 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:47712.service - OpenSSH per-connection server daemon (10.0.0.1:47712).
Apr 22 23:46:08.282758 systemd-logind[1615]: Removed session 5.
Apr 22 23:46:08.574806 sshd[1803]: Accepted publickey for core from 10.0.0.1 port 47712 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 22 23:46:08.587400 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 22 23:46:08.619077 systemd-logind[1615]: New session 6 of user core.
Apr 22 23:46:08.625109 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 22 23:46:08.666129 sshd[1809]: Connection closed by 10.0.0.1 port 47712
Apr 22 23:46:08.669811 sshd-session[1803]: pam_unix(sshd:session): session closed for user core
Apr 22 23:46:08.821149 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:47712.service: Deactivated successfully.
Apr 22 23:46:08.839646 systemd[1]: session-6.scope: Deactivated successfully.
Apr 22 23:46:08.890854 systemd-logind[1615]: Session 6 logged out. Waiting for processes to exit.
Apr 22 23:46:08.905263 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:47728.service - OpenSSH per-connection server daemon (10.0.0.1:47728).
Apr 22 23:46:08.907054 systemd-logind[1615]: Removed session 6.
Apr 22 23:46:09.354140 sshd[1815]: Accepted publickey for core from 10.0.0.1 port 47728 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 22 23:46:09.357624 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 23:46:09.390964 systemd-logind[1615]: New session 7 of user core. Apr 22 23:46:09.404656 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 22 23:46:09.453681 sshd[1820]: Connection closed by 10.0.0.1 port 47728 Apr 22 23:46:09.454650 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Apr 22 23:46:09.569307 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:47728.service: Deactivated successfully. Apr 22 23:46:09.596296 systemd[1]: session-7.scope: Deactivated successfully. Apr 22 23:46:09.618707 systemd-logind[1615]: Session 7 logged out. Waiting for processes to exit. Apr 22 23:46:09.633976 kubelet[1786]: E0422 23:46:09.633622 1786 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 22 23:46:09.635529 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:47730.service - OpenSSH per-connection server daemon (10.0.0.1:47730). Apr 22 23:46:09.637062 systemd-logind[1615]: Removed session 7. Apr 22 23:46:09.637569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 22 23:46:09.637717 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 22 23:46:09.638163 systemd[1]: kubelet.service: Consumed 16.066s CPU time, 269.7M memory peak. 
Apr 22 23:46:09.943306 sshd[1826]: Accepted publickey for core from 10.0.0.1 port 47730 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 22 23:46:09.950044 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 23:46:09.987097 systemd-logind[1615]: New session 8 of user core. Apr 22 23:46:10.004562 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 22 23:46:10.211374 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 22 23:46:10.212260 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 22 23:46:11.665023 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 22 23:46:11.708090 (dockerd)[1853]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 22 23:46:12.719216 dockerd[1853]: time="2026-04-22T23:46:12.718915549Z" level=info msg="Starting up" Apr 22 23:46:12.721597 dockerd[1853]: time="2026-04-22T23:46:12.721486303Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 22 23:46:12.865324 dockerd[1853]: time="2026-04-22T23:46:12.865123798Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 22 23:46:13.040107 dockerd[1853]: time="2026-04-22T23:46:13.039368159Z" level=info msg="Loading containers: start." Apr 22 23:46:13.061563 kernel: Initializing XFRM netlink socket Apr 22 23:46:13.910693 systemd-networkd[1542]: docker0: Link UP Apr 22 23:46:13.918336 dockerd[1853]: time="2026-04-22T23:46:13.918111077Z" level=info msg="Loading containers: done." 
Apr 22 23:46:13.962274 dockerd[1853]: time="2026-04-22T23:46:13.961935055Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 22 23:46:13.962584 dockerd[1853]: time="2026-04-22T23:46:13.962325957Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 22 23:46:13.962584 dockerd[1853]: time="2026-04-22T23:46:13.962516053Z" level=info msg="Initializing buildkit" Apr 22 23:46:14.047551 dockerd[1853]: time="2026-04-22T23:46:14.047153746Z" level=info msg="Completed buildkit initialization" Apr 22 23:46:14.057966 dockerd[1853]: time="2026-04-22T23:46:14.057059538Z" level=info msg="Daemon has completed initialization" Apr 22 23:46:14.059025 dockerd[1853]: time="2026-04-22T23:46:14.058316223Z" level=info msg="API listen on /run/docker.sock" Apr 22 23:46:14.059920 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 22 23:46:15.235047 containerd[1642]: time="2026-04-22T23:46:15.234600324Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 22 23:46:16.436950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount493663827.mount: Deactivated successfully. Apr 22 23:46:17.778376 update_engine[1620]: I20260422 23:46:17.772975 1620 update_attempter.cc:509] Updating boot flags... 
Apr 22 23:46:19.354557 containerd[1642]: time="2026-04-22T23:46:19.353927523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:46:19.355390 containerd[1642]: time="2026-04-22T23:46:19.354722892Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=29181781" Apr 22 23:46:19.358325 containerd[1642]: time="2026-04-22T23:46:19.358083555Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:46:19.362348 containerd[1642]: time="2026-04-22T23:46:19.362262436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:46:19.363267 containerd[1642]: time="2026-04-22T23:46:19.363095984Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 4.128206585s" Apr 22 23:46:19.363267 containerd[1642]: time="2026-04-22T23:46:19.363247276Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 22 23:46:19.365311 containerd[1642]: time="2026-04-22T23:46:19.365236240Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 22 23:46:19.859808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Apr 22 23:46:19.869802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 22 23:46:24.507757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 22 23:46:24.582152 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 22 23:46:25.011774 kubelet[2156]: E0422 23:46:25.011056 2156 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 22 23:46:25.020564 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 22 23:46:25.022603 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 22 23:46:25.023904 systemd[1]: kubelet.service: Consumed 3.491s CPU time, 109M memory peak. 
Apr 22 23:46:25.437093 containerd[1642]: time="2026-04-22T23:46:25.436826999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:46:25.439134 containerd[1642]: time="2026-04-22T23:46:25.438256458Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26162658" Apr 22 23:46:25.440771 containerd[1642]: time="2026-04-22T23:46:25.440669121Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:46:25.453293 containerd[1642]: time="2026-04-22T23:46:25.453042422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:46:25.455273 containerd[1642]: time="2026-04-22T23:46:25.455006593Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 6.089697338s" Apr 22 23:46:25.455273 containerd[1642]: time="2026-04-22T23:46:25.455133849Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 22 23:46:25.457003 containerd[1642]: time="2026-04-22T23:46:25.456908069Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 22 23:46:31.007927 containerd[1642]: time="2026-04-22T23:46:31.006651708Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:46:31.011047 containerd[1642]: time="2026-04-22T23:46:31.010998957Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20285581" Apr 22 23:46:31.017824 containerd[1642]: time="2026-04-22T23:46:31.017180819Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:46:31.038304 containerd[1642]: time="2026-04-22T23:46:31.037848539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:46:31.041831 containerd[1642]: time="2026-04-22T23:46:31.041389434Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 5.584341761s" Apr 22 23:46:31.041831 containerd[1642]: time="2026-04-22T23:46:31.041762887Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 22 23:46:31.044766 containerd[1642]: time="2026-04-22T23:46:31.044744029Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 22 23:46:35.118895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 22 23:46:35.125800 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 22 23:46:35.257804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425909326.mount: Deactivated successfully. Apr 22 23:46:36.140824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 22 23:46:36.160974 (kubelet)[2184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 22 23:46:36.936080 kubelet[2184]: E0422 23:46:36.935707 2184 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 22 23:46:36.999042 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 22 23:46:36.999393 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 22 23:46:37.003680 systemd[1]: kubelet.service: Consumed 1.154s CPU time, 109M memory peak. 
Apr 22 23:46:40.016553 containerd[1642]: time="2026-04-22T23:46:40.015039777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:46:40.023048 containerd[1642]: time="2026-04-22T23:46:40.018093156Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32006989" Apr 22 23:46:40.037053 containerd[1642]: time="2026-04-22T23:46:40.025099456Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:46:40.068131 containerd[1642]: time="2026-04-22T23:46:40.067329400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:46:40.081517 containerd[1642]: time="2026-04-22T23:46:40.080884846Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 9.03604876s" Apr 22 23:46:40.081517 containerd[1642]: time="2026-04-22T23:46:40.081131869Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 22 23:46:40.111182 containerd[1642]: time="2026-04-22T23:46:40.110744354Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 22 23:46:43.306522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3129568974.mount: Deactivated successfully. 
Apr 22 23:46:47.114853 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 22 23:46:47.138330 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 22 23:46:47.821219 containerd[1642]: time="2026-04-22T23:46:47.820177592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:46:47.851178 containerd[1642]: time="2026-04-22T23:46:47.849995510Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20931059" Apr 22 23:46:47.859750 containerd[1642]: time="2026-04-22T23:46:47.859584010Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:46:47.870899 containerd[1642]: time="2026-04-22T23:46:47.869857432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:46:47.872135 containerd[1642]: time="2026-04-22T23:46:47.871234805Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 7.760119857s" Apr 22 23:46:47.872135 containerd[1642]: time="2026-04-22T23:46:47.871359000Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 22 23:46:47.874166 containerd[1642]: time="2026-04-22T23:46:47.874062245Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" 
Apr 22 23:46:47.962701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 22 23:46:48.019663 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 22 23:46:48.326163 kubelet[2254]: E0422 23:46:48.324229 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 22 23:46:48.346007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 22 23:46:48.346350 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 22 23:46:48.352109 systemd[1]: kubelet.service: Consumed 813ms CPU time, 110.4M memory peak. Apr 22 23:46:48.924564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2158715926.mount: Deactivated successfully. 
Apr 22 23:46:48.951030 containerd[1642]: time="2026-04-22T23:46:48.950649906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 22 23:46:48.952015 containerd[1642]: time="2026-04-22T23:46:48.951982496Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 22 23:46:48.954375 containerd[1642]: time="2026-04-22T23:46:48.954161578Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 22 23:46:48.957159 containerd[1642]: time="2026-04-22T23:46:48.957045402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 22 23:46:48.957636 containerd[1642]: time="2026-04-22T23:46:48.957611888Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.083441059s" Apr 22 23:46:48.957636 containerd[1642]: time="2026-04-22T23:46:48.957632459Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 22 23:46:48.959319 containerd[1642]: time="2026-04-22T23:46:48.959155001Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 22 23:46:51.723951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3518012672.mount: Deactivated 
successfully. Apr 22 23:46:58.507830 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 22 23:46:58.621840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 22 23:47:03.501839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 22 23:47:03.524621 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 22 23:47:04.094340 kubelet[2288]: E0422 23:47:04.094085 2288 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 22 23:47:04.107370 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 22 23:47:04.107845 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 22 23:47:04.114278 systemd[1]: kubelet.service: Consumed 2.946s CPU time, 110.6M memory peak. Apr 22 23:47:14.360246 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 22 23:47:14.383641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 22 23:47:14.964632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 22 23:47:14.987937 (kubelet)[2342]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 22 23:47:15.825153 kubelet[2342]: E0422 23:47:15.824325 2342 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 22 23:47:15.851259 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 22 23:47:15.851973 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 22 23:47:15.857675 systemd[1]: kubelet.service: Consumed 1.134s CPU time, 108.7M memory peak. Apr 22 23:47:26.259335 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 22 23:47:26.352306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 22 23:47:35.489194 containerd[1642]: time="2026-04-22T23:47:35.473609637Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23707771" Apr 22 23:47:35.489194 containerd[1642]: time="2026-04-22T23:47:35.489371203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:47:35.540756 containerd[1642]: time="2026-04-22T23:47:35.538373836Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:47:35.614811 containerd[1642]: time="2026-04-22T23:47:35.613798586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:47:35.617185 containerd[1642]: time="2026-04-22T23:47:35.617091624Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 46.657055089s" Apr 22 23:47:35.625381 containerd[1642]: time="2026-04-22T23:47:35.623143955Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 22 23:47:41.646699 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 22 23:47:41.707093 (kubelet)[2395]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 22 23:47:42.753720 kubelet[2395]: E0422 23:47:42.753123 2395 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 22 23:47:42.765322 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 22 23:47:42.767937 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 22 23:47:42.782393 systemd[1]: kubelet.service: Consumed 8.452s CPU time, 111.4M memory peak. Apr 22 23:47:53.015076 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 22 23:47:53.166142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 22 23:47:57.066838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 22 23:47:57.122150 (kubelet)[2414]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 22 23:47:58.220103 kubelet[2414]: E0422 23:47:58.219350 2414 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 22 23:47:58.312007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 22 23:47:58.312855 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 22 23:47:58.314201 systemd[1]: kubelet.service: Consumed 2.700s CPU time, 110.5M memory peak. 
Apr 22 23:48:02.098089 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 22 23:48:02.099078 systemd[1]: kubelet.service: Consumed 2.700s CPU time, 110.5M memory peak. Apr 22 23:48:02.186879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 22 23:48:02.682888 systemd[1]: Reload requested from client PID 2430 ('systemctl') (unit session-8.scope)... Apr 22 23:48:02.683056 systemd[1]: Reloading... Apr 22 23:48:04.125230 zram_generator::config[2478]: No configuration found. Apr 22 23:48:11.782911 systemd[1]: Reloading finished in 9093 ms. Apr 22 23:48:12.351384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 22 23:48:12.357290 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 22 23:48:12.389343 systemd[1]: kubelet.service: Deactivated successfully. Apr 22 23:48:12.393303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 22 23:48:12.396320 systemd[1]: kubelet.service: Consumed 933ms CPU time, 98.4M memory peak. Apr 22 23:48:12.418217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 22 23:48:15.699207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 22 23:48:15.732914 (kubelet)[2526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 22 23:48:16.632915 kubelet[2526]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 22 23:48:16.640121 kubelet[2526]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Apr 22 23:48:16.640121 kubelet[2526]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 22 23:48:16.646209 kubelet[2526]: I0422 23:48:16.637356 2526 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 22 23:48:24.713852 kubelet[2526]: I0422 23:48:24.712327 2526 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 22 23:48:24.713852 kubelet[2526]: I0422 23:48:24.713186 2526 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 22 23:48:24.717377 kubelet[2526]: I0422 23:48:24.714378 2526 server.go:956] "Client rotation is on, will bootstrap in background" Apr 22 23:48:25.316942 kubelet[2526]: E0422 23:48:25.315939 2526 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 22 23:48:25.337846 kubelet[2526]: I0422 23:48:25.333351 2526 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 22 23:48:25.718392 kubelet[2526]: I0422 23:48:25.717350 2526 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 22 23:48:26.304146 kubelet[2526]: I0422 23:48:26.302058 2526 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 22 23:48:26.374102 kubelet[2526]: I0422 23:48:26.367403 2526 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 22 23:48:26.375262 kubelet[2526]: I0422 23:48:26.372260 2526 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 22 23:48:26.375262 kubelet[2526]: I0422 23:48:26.374860 2526 topology_manager.go:138] "Creating topology manager with none policy" Apr 22 23:48:26.375262 
kubelet[2526]: I0422 23:48:26.374957 2526 container_manager_linux.go:303] "Creating device plugin manager" Apr 22 23:48:26.382370 kubelet[2526]: I0422 23:48:26.382056 2526 state_mem.go:36] "Initialized new in-memory state store" Apr 22 23:48:26.673368 kubelet[2526]: I0422 23:48:26.662273 2526 kubelet.go:480] "Attempting to sync node with API server" Apr 22 23:48:26.686139 kubelet[2526]: I0422 23:48:26.678347 2526 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 22 23:48:26.686139 kubelet[2526]: I0422 23:48:26.686052 2526 kubelet.go:386] "Adding apiserver pod source" Apr 22 23:48:26.702372 kubelet[2526]: I0422 23:48:26.687395 2526 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 22 23:48:26.752185 kubelet[2526]: E0422 23:48:26.747392 2526 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 22 23:48:26.752185 kubelet[2526]: E0422 23:48:26.747381 2526 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 22 23:48:26.820398 kubelet[2526]: I0422 23:48:26.818909 2526 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Apr 22 23:48:26.886883 kubelet[2526]: I0422 23:48:26.884365 2526 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 22 23:48:26.964946 kubelet[2526]: W0422 
23:48:26.960982 2526 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 22 23:48:27.297294 kubelet[2526]: I0422 23:48:27.292255 2526 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 22 23:48:27.302389 kubelet[2526]: I0422 23:48:27.297920 2526 server.go:1289] "Started kubelet" Apr 22 23:48:27.307330 kubelet[2526]: I0422 23:48:27.305917 2526 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 22 23:48:27.360233 kubelet[2526]: I0422 23:48:27.350233 2526 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 22 23:48:27.397097 kubelet[2526]: I0422 23:48:27.387087 2526 server.go:317] "Adding debug handlers to kubelet server" Apr 22 23:48:27.452371 kubelet[2526]: E0422 23:48:27.388296 2526 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 22 23:48:27.457955 kubelet[2526]: E0422 23:48:27.388269 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8d2b038011c34 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-22 23:48:27.29333458 +0000 UTC m=+11.501556161,LastTimestamp:2026-04-22 23:48:27.29333458 +0000 UTC m=+11.501556161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 22 23:48:27.582343 kubelet[2526]: I0422 23:48:27.572337 2526 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 22 23:48:27.617368 kubelet[2526]: I0422 23:48:27.616701 2526 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 22 23:48:27.633294 kubelet[2526]: I0422 23:48:27.632986 2526 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 22 23:48:27.642198 kubelet[2526]: E0422 23:48:27.641186 2526 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:48:27.642198 kubelet[2526]: I0422 23:48:27.642055 2526 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 22 23:48:27.653092 kubelet[2526]: I0422 23:48:27.642190 2526 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 22 23:48:27.677173 kubelet[2526]: I0422 23:48:27.653400 2526 reconciler.go:26] "Reconciler: start to sync state" Apr 22 23:48:27.708072 kubelet[2526]: E0422 23:48:27.706145 2526 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 22 23:48:27.717293 kubelet[2526]: E0422 23:48:27.703345 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="200ms" Apr 22 23:48:27.718314 kubelet[2526]: I0422 23:48:27.714382 2526 factory.go:223] Registration of the systemd container 
factory successfully Apr 22 23:48:27.718314 kubelet[2526]: I0422 23:48:27.718254 2526 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 22 23:48:27.746936 kubelet[2526]: E0422 23:48:27.745138 2526 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:48:27.746936 kubelet[2526]: E0422 23:48:27.746337 2526 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 22 23:48:27.850173 kubelet[2526]: I0422 23:48:27.839388 2526 factory.go:223] Registration of the containerd container factory successfully Apr 22 23:48:27.873192 kubelet[2526]: E0422 23:48:27.850181 2526 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:48:27.878144 kubelet[2526]: E0422 23:48:27.874360 2526 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 22 23:48:27.997081 kubelet[2526]: E0422 23:48:27.996210 2526 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:48:28.044262 kubelet[2526]: E0422 23:48:28.041015 2526 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 22 23:48:28.060052 kubelet[2526]: E0422 
23:48:28.059164 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="400ms" Apr 22 23:48:28.105362 kubelet[2526]: E0422 23:48:28.104842 2526 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:48:28.288185 kubelet[2526]: E0422 23:48:28.281226 2526 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:48:28.400055 kubelet[2526]: E0422 23:48:28.399028 2526 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:48:28.510204 kubelet[2526]: E0422 23:48:28.510018 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="800ms" Apr 22 23:48:28.591373 kubelet[2526]: E0422 23:48:28.513282 2526 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:48:28.649888 kubelet[2526]: I0422 23:48:28.638104 2526 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 22 23:48:28.654236 kubelet[2526]: I0422 23:48:28.651361 2526 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 22 23:48:28.654236 kubelet[2526]: I0422 23:48:28.652103 2526 state_mem.go:36] "Initialized new in-memory state store" Apr 22 23:48:28.654236 kubelet[2526]: E0422 23:48:28.653020 2526 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 22 23:48:28.692008 kubelet[2526]: E0422 23:48:28.691145 2526 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:48:28.695214 kubelet[2526]: I0422 23:48:28.695191 2526 policy_none.go:49] "None policy: Start" Apr 22 23:48:28.695931 kubelet[2526]: I0422 23:48:28.695912 2526 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 22 23:48:28.696324 kubelet[2526]: I0422 23:48:28.696311 2526 state_mem.go:35] "Initializing new in-memory state store" Apr 22 23:48:28.795316 kubelet[2526]: E0422 23:48:28.793385 2526 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:48:28.812211 kubelet[2526]: I0422 23:48:28.811989 2526 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 22 23:48:28.858981 kubelet[2526]: I0422 23:48:28.856359 2526 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 22 23:48:28.858981 kubelet[2526]: I0422 23:48:28.857155 2526 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 22 23:48:28.858981 kubelet[2526]: I0422 23:48:28.857377 2526 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 22 23:48:28.858981 kubelet[2526]: I0422 23:48:28.857901 2526 kubelet.go:2436] "Starting kubelet main sync loop" Apr 22 23:48:28.858981 kubelet[2526]: E0422 23:48:28.858010 2526 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 22 23:48:28.900124 kubelet[2526]: E0422 23:48:28.884387 2526 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 22 23:48:28.912274 kubelet[2526]: E0422 23:48:28.908383 2526 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:48:28.959197 kubelet[2526]: E0422 23:48:28.958309 2526 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 22 23:48:28.994277 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Apr 22 23:48:29.086210 kubelet[2526]: E0422 23:48:29.063173 2526 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:48:29.175247 kubelet[2526]: E0422 23:48:29.168278 2526 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:48:29.175247 kubelet[2526]: E0422 23:48:29.169168 2526 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 22 23:48:29.278007 kubelet[2526]: E0422 23:48:29.276249 2526 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:48:29.399263 kubelet[2526]: E0422 23:48:29.398974 2526 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:48:29.400335 kubelet[2526]: E0422 23:48:29.399943 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="1.6s" Apr 22 23:48:29.439923 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 22 23:48:29.574224 kubelet[2526]: E0422 23:48:29.569350 2526 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:48:29.592172 kubelet[2526]: E0422 23:48:29.579319 2526 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 22 23:48:29.672132 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 22 23:48:29.722268 kubelet[2526]: E0422 23:48:29.720379 2526 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:48:29.788185 kubelet[2526]: E0422 23:48:29.786292 2526 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 22 23:48:29.797290 kubelet[2526]: I0422 23:48:29.796360 2526 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 22 23:48:29.809360 kubelet[2526]: I0422 23:48:29.808187 2526 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 22 23:48:29.833283 kubelet[2526]: E0422 23:48:29.823987 2526 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:48:29.862236 kubelet[2526]: I0422 23:48:29.860255 2526 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 22 23:48:30.097303 kubelet[2526]: E0422 23:48:30.085078 2526 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 22 23:48:30.186125 kubelet[2526]: E0422 23:48:30.185998 2526 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 22 23:48:30.205393 kubelet[2526]: E0422 23:48:30.205190 2526 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 22 23:48:30.285348 kubelet[2526]: I0422 23:48:30.285081 2526 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 22 23:48:30.448251 kubelet[2526]: E0422 23:48:30.447871 2526 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 22 23:48:30.602980 kubelet[2526]: E0422 23:48:30.602096 2526 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 22 23:48:30.607372 kubelet[2526]: I0422 23:48:30.606342 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/842680e0fe04fac74a262ce349051ab2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"842680e0fe04fac74a262ce349051ab2\") " pod="kube-system/kube-apiserver-localhost" Apr 22 23:48:30.607372 kubelet[2526]: I0422 23:48:30.607191 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/842680e0fe04fac74a262ce349051ab2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"842680e0fe04fac74a262ce349051ab2\") " pod="kube-system/kube-apiserver-localhost" Apr 22 23:48:30.624010 kubelet[2526]: I0422 23:48:30.607993 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/842680e0fe04fac74a262ce349051ab2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"842680e0fe04fac74a262ce349051ab2\") " pod="kube-system/kube-apiserver-localhost" Apr 22 23:48:30.820973 kubelet[2526]: E0422 23:48:30.818108 2526 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 22 23:48:30.847325 kubelet[2526]: I0422 23:48:30.846318 2526 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 22 23:48:30.888294 kubelet[2526]: E0422 23:48:30.888040 2526 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 22 23:48:30.906044 kubelet[2526]: I0422 23:48:30.902214 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 23:48:30.908231 kubelet[2526]: I0422 23:48:30.908202 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 23:48:30.914163 kubelet[2526]: I0422 23:48:30.914087 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 23:48:30.935394 kubelet[2526]: I0422 23:48:30.924054 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 23:48:30.935394 kubelet[2526]: I0422 23:48:30.924227 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 23:48:31.066861 kubelet[2526]: I0422 23:48:31.064153 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 22 23:48:31.069217 kubelet[2526]: E0422 23:48:31.068214 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="3.2s" Apr 22 23:48:31.210018 kubelet[2526]: E0422 23:48:31.202346 2526 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial 
tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 22 23:48:31.490390 kubelet[2526]: I0422 23:48:31.486133 2526 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 22 23:48:31.486994 systemd[1]: Created slice kubepods-burstable-pod842680e0fe04fac74a262ce349051ab2.slice - libcontainer container kubepods-burstable-pod842680e0fe04fac74a262ce349051ab2.slice. Apr 22 23:48:31.522325 kubelet[2526]: E0422 23:48:31.521995 2526 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 22 23:48:31.641935 kubelet[2526]: E0422 23:48:31.639955 2526 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 22 23:48:31.654847 kubelet[2526]: E0422 23:48:31.654013 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:48:31.705156 containerd[1642]: time="2026-04-22T23:48:31.688325117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:842680e0fe04fac74a262ce349051ab2,Namespace:kube-system,Attempt:0,}" Apr 22 23:48:31.758957 systemd[1]: Created slice kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice - libcontainer container kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice. 
Apr 22 23:48:31.866132 kubelet[2526]: E0422 23:48:31.865342 2526 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 22 23:48:31.910057 kubelet[2526]: E0422 23:48:31.896314 2526 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 22 23:48:31.952400 kubelet[2526]: E0422 23:48:31.950398 2526 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 22 23:48:31.962928 kubelet[2526]: E0422 23:48:31.962370 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:48:31.985878 systemd[1]: Created slice kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice - libcontainer container kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice. 
Apr 22 23:48:31.991369 containerd[1642]: time="2026-04-22T23:48:31.991325588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 22 23:48:32.450072 kubelet[2526]: E0422 23:48:32.440376 2526 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 22 23:48:32.716303 kubelet[2526]: E0422 23:48:32.701940 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:48:32.744902 containerd[1642]: time="2026-04-22T23:48:32.740915549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 22 23:48:32.800365 kubelet[2526]: I0422 23:48:32.800253 2526 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 22 23:48:32.851230 kubelet[2526]: E0422 23:48:32.849304 2526 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 22 23:48:34.490311 kubelet[2526]: E0422 23:48:34.490182 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="6.4s" Apr 22 23:48:34.623321 kubelet[2526]: I0422 23:48:34.622346 2526 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 22 23:48:34.693187 kubelet[2526]: E0422 23:48:34.692218 2526 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: 
connect: connection refused" node="localhost" Apr 22 23:48:34.719357 containerd[1642]: time="2026-04-22T23:48:34.714002091Z" level=info msg="connecting to shim 3a52df8745692d6d701aeca319c0f89cb4c1c1d062a7028de80f7b311649ba0a" address="unix:///run/containerd/s/3dfe3d6323e8f8f75559a13206af03e2868c85f01fbf4abe1aaa822e71aa8528" namespace=k8s.io protocol=ttrpc version=3 Apr 22 23:48:34.734340 containerd[1642]: time="2026-04-22T23:48:34.719242756Z" level=info msg="connecting to shim 9c9db74f9f562891448db2db4e40d4f185e32e190ba41d407668b34b512632bd" address="unix:///run/containerd/s/52c20a04aff33a832cf09fe18f3d8420202bcd603b0b0170557108773320997c" namespace=k8s.io protocol=ttrpc version=3 Apr 22 23:48:34.739038 containerd[1642]: time="2026-04-22T23:48:34.737166789Z" level=info msg="connecting to shim 82ed08deedb73a467b813a11abe8e519e015197241a12f6df5dd08870617b0a8" address="unix:///run/containerd/s/f1b340a8490173f1517272e66ae28aed4af3511da87640b636f04a9a08cfd7fa" namespace=k8s.io protocol=ttrpc version=3 Apr 22 23:48:35.133250 kubelet[2526]: E0422 23:48:35.124275 2526 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 22 23:48:35.509333 kubelet[2526]: E0422 23:48:35.506355 2526 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 22 23:48:36.459871 kubelet[2526]: E0422 23:48:36.455305 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 
10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8d2b038011c34 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-22 23:48:27.29333458 +0000 UTC m=+11.501556161,LastTimestamp:2026-04-22 23:48:27.29333458 +0000 UTC m=+11.501556161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 22 23:48:36.518955 kubelet[2526]: E0422 23:48:36.518231 2526 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 22 23:48:37.076334 kubelet[2526]: E0422 23:48:37.058974 2526 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 22 23:48:38.249384 systemd[1]: Started cri-containerd-3a52df8745692d6d701aeca319c0f89cb4c1c1d062a7028de80f7b311649ba0a.scope - libcontainer container 3a52df8745692d6d701aeca319c0f89cb4c1c1d062a7028de80f7b311649ba0a. Apr 22 23:48:38.400894 systemd[1]: Started cri-containerd-82ed08deedb73a467b813a11abe8e519e015197241a12f6df5dd08870617b0a8.scope - libcontainer container 82ed08deedb73a467b813a11abe8e519e015197241a12f6df5dd08870617b0a8. 
Apr 22 23:48:38.464828 kubelet[2526]: I0422 23:48:38.464368 2526 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 22 23:48:38.603787 kubelet[2526]: E0422 23:48:38.575782 2526 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 22 23:48:38.683178 systemd[1]: Started cri-containerd-9c9db74f9f562891448db2db4e40d4f185e32e190ba41d407668b34b512632bd.scope - libcontainer container 9c9db74f9f562891448db2db4e40d4f185e32e190ba41d407668b34b512632bd. Apr 22 23:48:39.992252 containerd[1642]: time="2026-04-22T23:48:39.987399531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a52df8745692d6d701aeca319c0f89cb4c1c1d062a7028de80f7b311649ba0a\"" Apr 22 23:48:40.168809 kubelet[2526]: E0422 23:48:40.165326 2526 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 22 23:48:40.235823 kubelet[2526]: E0422 23:48:40.235194 2526 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 22 23:48:40.236195 containerd[1642]: time="2026-04-22T23:48:40.221027239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c9db74f9f562891448db2db4e40d4f185e32e190ba41d407668b34b512632bd\"" Apr 22 23:48:40.238177 containerd[1642]: time="2026-04-22T23:48:40.237000803Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:842680e0fe04fac74a262ce349051ab2,Namespace:kube-system,Attempt:0,} returns sandbox id \"82ed08deedb73a467b813a11abe8e519e015197241a12f6df5dd08870617b0a8\"" Apr 22 23:48:40.251038 kubelet[2526]: E0422 23:48:40.247037 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:48:40.251038 kubelet[2526]: E0422 23:48:40.247849 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:48:40.257851 kubelet[2526]: E0422 23:48:40.257802 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:48:40.489138 containerd[1642]: time="2026-04-22T23:48:40.485755121Z" level=info msg="CreateContainer within sandbox \"9c9db74f9f562891448db2db4e40d4f185e32e190ba41d407668b34b512632bd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 22 23:48:40.499168 containerd[1642]: time="2026-04-22T23:48:40.484307468Z" level=info msg="CreateContainer within sandbox \"82ed08deedb73a467b813a11abe8e519e015197241a12f6df5dd08870617b0a8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 22 23:48:40.663128 containerd[1642]: time="2026-04-22T23:48:40.662887778Z" level=info msg="CreateContainer within sandbox \"3a52df8745692d6d701aeca319c0f89cb4c1c1d062a7028de80f7b311649ba0a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 22 23:48:40.965685 kubelet[2526]: E0422 23:48:40.961919 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: 
connection refused" interval="7s" Apr 22 23:48:41.244795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3182919288.mount: Deactivated successfully. Apr 22 23:48:41.249262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount212141475.mount: Deactivated successfully. Apr 22 23:48:41.258115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount545738167.mount: Deactivated successfully. Apr 22 23:48:41.291238 containerd[1642]: time="2026-04-22T23:48:41.287252235Z" level=info msg="Container f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65: CDI devices from CRI Config.CDIDevices: []" Apr 22 23:48:41.306142 containerd[1642]: time="2026-04-22T23:48:41.301353811Z" level=info msg="Container f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1: CDI devices from CRI Config.CDIDevices: []" Apr 22 23:48:41.454879 containerd[1642]: time="2026-04-22T23:48:41.454311755Z" level=info msg="Container 0cfe4f43d53ea2bcf2a2f3f6612f7c458a8ac147a47657d1e60f159900b07bef: CDI devices from CRI Config.CDIDevices: []" Apr 22 23:48:41.540259 containerd[1642]: time="2026-04-22T23:48:41.537269969Z" level=info msg="CreateContainer within sandbox \"3a52df8745692d6d701aeca319c0f89cb4c1c1d062a7028de80f7b311649ba0a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65\"" Apr 22 23:48:41.562828 containerd[1642]: time="2026-04-22T23:48:41.562140701Z" level=info msg="CreateContainer within sandbox \"9c9db74f9f562891448db2db4e40d4f185e32e190ba41d407668b34b512632bd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1\"" Apr 22 23:48:41.609094 containerd[1642]: time="2026-04-22T23:48:41.607130372Z" level=info msg="StartContainer for \"f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65\"" Apr 22 23:48:41.617775 containerd[1642]: 
time="2026-04-22T23:48:41.608276891Z" level=info msg="StartContainer for \"f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1\"" Apr 22 23:48:41.667895 containerd[1642]: time="2026-04-22T23:48:41.666153209Z" level=info msg="CreateContainer within sandbox \"82ed08deedb73a467b813a11abe8e519e015197241a12f6df5dd08870617b0a8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0cfe4f43d53ea2bcf2a2f3f6612f7c458a8ac147a47657d1e60f159900b07bef\"" Apr 22 23:48:41.852734 containerd[1642]: time="2026-04-22T23:48:41.851998708Z" level=info msg="connecting to shim f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1" address="unix:///run/containerd/s/52c20a04aff33a832cf09fe18f3d8420202bcd603b0b0170557108773320997c" protocol=ttrpc version=3 Apr 22 23:48:41.852734 containerd[1642]: time="2026-04-22T23:48:41.852207232Z" level=info msg="StartContainer for \"0cfe4f43d53ea2bcf2a2f3f6612f7c458a8ac147a47657d1e60f159900b07bef\"" Apr 22 23:48:41.858728 containerd[1642]: time="2026-04-22T23:48:41.858347591Z" level=info msg="connecting to shim f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65" address="unix:///run/containerd/s/3dfe3d6323e8f8f75559a13206af03e2868c85f01fbf4abe1aaa822e71aa8528" protocol=ttrpc version=3 Apr 22 23:48:42.005784 containerd[1642]: time="2026-04-22T23:48:42.004989294Z" level=info msg="connecting to shim 0cfe4f43d53ea2bcf2a2f3f6612f7c458a8ac147a47657d1e60f159900b07bef" address="unix:///run/containerd/s/f1b340a8490173f1517272e66ae28aed4af3511da87640b636f04a9a08cfd7fa" protocol=ttrpc version=3 Apr 22 23:48:42.419248 kubelet[2526]: E0422 23:48:42.419165 2526 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 22 23:48:42.782396 
systemd[1]: Started cri-containerd-0cfe4f43d53ea2bcf2a2f3f6612f7c458a8ac147a47657d1e60f159900b07bef.scope - libcontainer container 0cfe4f43d53ea2bcf2a2f3f6612f7c458a8ac147a47657d1e60f159900b07bef. Apr 22 23:48:42.822355 kubelet[2526]: E0422 23:48:42.816972 2526 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 22 23:48:42.836795 systemd[1]: Started cri-containerd-f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1.scope - libcontainer container f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1. Apr 22 23:48:42.902845 systemd[1]: Started cri-containerd-f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65.scope - libcontainer container f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65. 
Apr 22 23:48:44.121889 containerd[1642]: time="2026-04-22T23:48:44.115396297Z" level=info msg="StartContainer for \"0cfe4f43d53ea2bcf2a2f3f6612f7c458a8ac147a47657d1e60f159900b07bef\" returns successfully" Apr 22 23:48:44.515222 containerd[1642]: time="2026-04-22T23:48:44.486992684Z" level=info msg="StartContainer for \"f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65\" returns successfully" Apr 22 23:48:44.536825 containerd[1642]: time="2026-04-22T23:48:44.535348192Z" level=info msg="StartContainer for \"f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1\" returns successfully" Apr 22 23:48:44.588005 kubelet[2526]: E0422 23:48:44.578334 2526 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 22 23:48:46.605387 kubelet[2526]: I0422 23:48:46.592906 2526 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 22 23:48:48.301087 kubelet[2526]: E0422 23:48:48.299308 2526 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 22 23:48:48.301087 kubelet[2526]: E0422 23:48:48.299965 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:48:49.913235 kubelet[2526]: E0422 23:48:49.906120 2526 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 22 23:48:49.973097 kubelet[2526]: E0422 23:48:49.968143 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:48:50.263182 kubelet[2526]: E0422 23:48:50.253834 2526 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 22 23:48:52.374282 kubelet[2526]: E0422 23:48:52.372348 2526 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 22 23:48:52.586866 kubelet[2526]: E0422 23:48:52.586748 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:48:54.922320 kubelet[2526]: E0422 23:48:54.922095 2526 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 22 23:48:55.200947 kubelet[2526]: E0422 23:48:55.147270 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:48:55.200947 kubelet[2526]: E0422 23:48:55.147288 2526 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 22 23:48:55.646317 kubelet[2526]: E0422 23:48:55.619237 2526 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 22 23:48:55.822279 kubelet[2526]: E0422 23:48:55.820210 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 
23:48:56.190779 kubelet[2526]: E0422 23:48:56.190235 2526 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 22 23:48:56.191363 kubelet[2526]: E0422 23:48:56.191258 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:48:57.117057 kubelet[2526]: E0422 23:48:57.114771 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8d2b038011c34 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-22 23:48:27.29333458 +0000 UTC m=+11.501556161,LastTimestamp:2026-04-22 23:48:27.29333458 +0000 UTC m=+11.501556161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 22 23:48:57.159226 kubelet[2526]: E0422 23:48:57.121258 2526 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 22 23:48:57.551161 kubelet[2526]: E0422 23:48:57.547301 2526 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 22 23:48:57.551161 kubelet[2526]: E0422 23:48:57.548327 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 
23:48:57.555245 kubelet[2526]: E0422 23:48:57.554240 2526 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 22 23:48:57.567070 kubelet[2526]: E0422 23:48:57.566971 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:48:58.325255 kubelet[2526]: E0422 23:48:58.289294 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 22 23:48:58.946367 kubelet[2526]: E0422 23:48:58.943179 2526 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 22 23:48:59.157000 kubelet[2526]: E0422 23:48:59.153107 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:49:00.268144 kubelet[2526]: E0422 23:49:00.267390 2526 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 22 23:49:02.317327 kubelet[2526]: E0422 23:49:02.316779 2526 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 22 23:49:02.365279 kubelet[2526]: E0422 23:49:02.364828 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:49:09.553124 kubelet[2526]: E0422 23:49:09.552375 2526 kubelet.go:3305] "No need to create a mirror 
pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 22 23:49:09.783892 kubelet[2526]: E0422 23:49:09.552944 2526 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 22 23:49:10.017813 kubelet[2526]: I0422 23:49:09.706346 2526 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 22 23:49:13.208252 kubelet[2526]: E0422 23:49:13.207093 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:49:14.991162 kubelet[2526]: E0422 23:49:14.990972 2526 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 22 23:49:15.279179 kubelet[2526]: I0422 23:49:15.265935 2526 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 22 23:49:15.279179 kubelet[2526]: E0422 23:49:15.266206 2526 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 22 23:49:18.333224 kubelet[2526]: I0422 23:49:18.329155 2526 apiserver.go:52] "Watching apiserver" Apr 22 23:49:18.421323 kubelet[2526]: I0422 23:49:18.404016 2526 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 22 23:49:19.457952 kubelet[2526]: I0422 23:49:19.455310 2526 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 22 23:49:20.456233 kubelet[2526]: I0422 23:49:20.456078 2526 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 22 23:49:20.480072 kubelet[2526]: E0422 23:49:20.459373 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:49:20.683148 kubelet[2526]: I0422 23:49:20.681929 2526 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 22 23:49:21.251187 kubelet[2526]: E0422 23:49:21.246288 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:49:23.863893 kubelet[2526]: E0422 23:49:23.859157 2526 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.611s" Apr 22 23:49:24.086002 kubelet[2526]: E0422 23:49:24.084115 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:49:33.192053 kubelet[2526]: E0422 23:49:33.191851 2526 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.187s" Apr 22 23:49:33.737823 kubelet[2526]: I0422 23:49:33.722792 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=14.722352247 podStartE2EDuration="14.722352247s" podCreationTimestamp="2026-04-22 23:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-22 23:49:33.208972058 +0000 UTC m=+77.417193638" watchObservedRunningTime="2026-04-22 23:49:33.722352247 +0000 UTC m=+77.930573827" Apr 22 23:49:33.964347 kubelet[2526]: I0422 23:49:33.959987 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=13.959850924 podStartE2EDuration="13.959850924s" podCreationTimestamp="2026-04-22 23:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-22 23:49:33.758307793 +0000 UTC m=+77.966529374" watchObservedRunningTime="2026-04-22 23:49:33.959850924 +0000 UTC m=+78.168072500" Apr 22 23:49:35.893186 kubelet[2526]: E0422 23:49:35.885005 2526 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.652s" Apr 22 23:49:40.609927 kubelet[2526]: E0422 23:49:40.583379 2526 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.631s" Apr 22 23:49:44.707065 systemd[1]: Reload requested from client PID 2831 ('systemctl') (unit session-8.scope)... Apr 22 23:49:44.707082 systemd[1]: Reloading... Apr 22 23:49:47.784348 kubelet[2526]: E0422 23:49:47.784098 2526 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.884s" Apr 22 23:49:56.739231 zram_generator::config[2883]: No configuration found. Apr 22 23:50:21.008052 systemd[1]: Reloading finished in 36241 ms. Apr 22 23:50:21.121213 kubelet[2526]: E0422 23:50:21.119056 2526 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="31.901s" Apr 22 23:50:24.367986 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 22 23:50:26.092262 systemd[1]: kubelet.service: Deactivated successfully. Apr 22 23:50:26.155156 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 22 23:50:26.169326 systemd[1]: kubelet.service: Consumed 1min 8.750s CPU time, 141.6M memory peak. Apr 22 23:50:26.514259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 22 23:50:40.689344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 22 23:50:41.380677 (kubelet)[2933]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 22 23:50:46.238785 kubelet[2933]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 22 23:50:46.287345 kubelet[2933]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 22 23:50:46.287345 kubelet[2933]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 22 23:50:46.287345 kubelet[2933]: I0422 23:50:46.245106 2933 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 22 23:50:47.567923 kubelet[2933]: I0422 23:50:47.556179 2933 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 22 23:50:47.567923 kubelet[2933]: I0422 23:50:47.563367 2933 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 22 23:50:47.761575 kubelet[2933]: I0422 23:50:47.636368 2933 server.go:956] "Client rotation is on, will bootstrap in background" Apr 22 23:50:48.002299 kubelet[2933]: I0422 23:50:48.001609 2933 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 22 23:50:48.228048 kubelet[2933]: I0422 23:50:48.225103 2933 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 22 23:50:48.662912 kubelet[2933]: I0422 23:50:48.662606 2933 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Apr 22 23:50:49.083090 kubelet[2933]: I0422 23:50:49.076722 2933 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 22 23:50:49.179382 kubelet[2933]: I0422 23:50:49.176539 2933 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 22 23:50:49.183048 kubelet[2933]: I0422 23:50:49.180386 2933 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVers
ion":2} Apr 22 23:50:49.183048 kubelet[2933]: I0422 23:50:49.182755 2933 topology_manager.go:138] "Creating topology manager with none policy" Apr 22 23:50:49.183048 kubelet[2933]: I0422 23:50:49.182825 2933 container_manager_linux.go:303] "Creating device plugin manager" Apr 22 23:50:49.189212 kubelet[2933]: I0422 23:50:49.187130 2933 state_mem.go:36] "Initialized new in-memory state store" Apr 22 23:50:49.191899 kubelet[2933]: I0422 23:50:49.189304 2933 kubelet.go:480] "Attempting to sync node with API server" Apr 22 23:50:49.204121 kubelet[2933]: I0422 23:50:49.192194 2933 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 22 23:50:49.204121 kubelet[2933]: I0422 23:50:49.192360 2933 kubelet.go:386] "Adding apiserver pod source" Apr 22 23:50:49.204121 kubelet[2933]: I0422 23:50:49.194204 2933 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 22 23:50:49.414267 kubelet[2933]: I0422 23:50:49.414193 2933 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Apr 22 23:50:49.432391 kubelet[2933]: I0422 23:50:49.432178 2933 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 22 23:50:49.637280 kubelet[2933]: I0422 23:50:49.637006 2933 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 22 23:50:49.641386 kubelet[2933]: I0422 23:50:49.641131 2933 server.go:1289] "Started kubelet" Apr 22 23:50:49.676024 kubelet[2933]: I0422 23:50:49.656190 2933 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 22 23:50:49.789113 kubelet[2933]: I0422 23:50:49.787086 2933 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 22 23:50:49.941067 kubelet[2933]: I0422 23:50:49.937709 2933 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 22 23:50:49.950091 kubelet[2933]: I0422 
23:50:49.948003 2933 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 22 23:50:49.950091 kubelet[2933]: I0422 23:50:49.948185 2933 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 22 23:50:50.084009 kubelet[2933]: I0422 23:50:50.078160 2933 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 22 23:50:50.138516 kubelet[2933]: I0422 23:50:50.133064 2933 reconciler.go:26] "Reconciler: start to sync state" Apr 22 23:50:50.138516 kubelet[2933]: I0422 23:50:50.135464 2933 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 22 23:50:50.304045 kubelet[2933]: I0422 23:50:50.302722 2933 factory.go:223] Registration of the systemd container factory successfully Apr 22 23:50:50.353313 kubelet[2933]: I0422 23:50:50.352993 2933 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 22 23:50:50.457579 kubelet[2933]: I0422 23:50:50.443072 2933 apiserver.go:52] "Watching apiserver" Apr 22 23:50:50.566949 kubelet[2933]: I0422 23:50:50.562104 2933 server.go:317] "Adding debug handlers to kubelet server" Apr 22 23:50:51.200534 kubelet[2933]: E0422 23:50:51.199267 2933 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 22 23:50:51.250982 kubelet[2933]: I0422 23:50:51.249210 2933 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 22 23:50:51.360241 kubelet[2933]: I0422 23:50:51.358293 2933 factory.go:223] Registration of the containerd container factory successfully Apr 22 23:50:51.388343 kubelet[2933]: I0422 23:50:51.388211 2933 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 22 23:50:51.388343 kubelet[2933]: I0422 23:50:51.388354 2933 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 22 23:50:51.389997 kubelet[2933]: I0422 23:50:51.389808 2933 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 22 23:50:51.389997 kubelet[2933]: I0422 23:50:51.389851 2933 kubelet.go:2436] "Starting kubelet main sync loop" Apr 22 23:50:51.458131 kubelet[2933]: E0422 23:50:51.452026 2933 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 22 23:50:51.726846 kubelet[2933]: E0422 23:50:51.717391 2933 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 22 23:50:52.006065 kubelet[2933]: E0422 23:50:52.004160 2933 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 22 23:50:52.414836 kubelet[2933]: E0422 23:50:52.414459 2933 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 22 23:50:53.228194 kubelet[2933]: E0422 23:50:53.226263 2933 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 22 23:50:54.677971 kubelet[2933]: I0422 23:50:54.677806 2933 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 22 23:50:54.677971 kubelet[2933]: I0422 23:50:54.678036 2933 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 22 23:50:54.687190 kubelet[2933]: I0422 23:50:54.680297 2933 state_mem.go:36] "Initialized new in-memory state store" Apr 22 23:50:54.691480 kubelet[2933]: I0422 23:50:54.690323 2933 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 22 23:50:54.692255 kubelet[2933]: I0422 23:50:54.691794 2933 
state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 22 23:50:54.692255 kubelet[2933]: I0422 23:50:54.691963 2933 policy_none.go:49] "None policy: Start"
Apr 22 23:50:54.692255 kubelet[2933]: I0422 23:50:54.692009 2933 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 22 23:50:54.692255 kubelet[2933]: I0422 23:50:54.692053 2933 state_mem.go:35] "Initializing new in-memory state store"
Apr 22 23:50:54.692526 kubelet[2933]: I0422 23:50:54.692481 2933 state_mem.go:75] "Updated machine memory state"
Apr 22 23:50:54.845080 kubelet[2933]: E0422 23:50:54.844836 2933 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 22 23:50:55.167674 kubelet[2933]: E0422 23:50:55.166203 2933 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 22 23:50:55.185487 kubelet[2933]: I0422 23:50:55.183756 2933 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 22 23:50:55.185487 kubelet[2933]: I0422 23:50:55.185114 2933 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 22 23:50:55.231953 kubelet[2933]: I0422 23:50:55.231869 2933 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 22 23:50:55.646630 kubelet[2933]: I0422 23:50:55.646542 2933 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 22 23:50:55.687508 kubelet[2933]: E0422 23:50:55.685144 2933 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 22 23:50:55.702162 containerd[1642]: time="2026-04-22T23:50:55.685546380Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 22 23:50:55.853498 kubelet[2933]: I0422 23:50:55.702189 2933 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 22 23:50:56.435989 kubelet[2933]: I0422 23:50:56.433371 2933 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 22 23:50:57.030993 kubelet[2933]: I0422 23:50:57.030699 2933 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Apr 22 23:50:57.102688 kubelet[2933]: I0422 23:50:57.102348 2933 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 22 23:50:58.082877 kubelet[2933]: I0422 23:50:58.082486 2933 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 22 23:50:58.157959 kubelet[2933]: I0422 23:50:58.156041 2933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/79526fa3-4fea-45ab-95ea-4889982ff475-kube-proxy\") pod \"kube-proxy-sxdpp\" (UID: \"79526fa3-4fea-45ab-95ea-4889982ff475\") " pod="kube-system/kube-proxy-sxdpp"
Apr 22 23:50:58.303826 kubelet[2933]: I0422 23:50:58.303522 2933 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 22 23:50:58.367240 kubelet[2933]: I0422 23:50:58.356195 2933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79526fa3-4fea-45ab-95ea-4889982ff475-xtables-lock\") pod \"kube-proxy-sxdpp\" (UID: \"79526fa3-4fea-45ab-95ea-4889982ff475\") " pod="kube-system/kube-proxy-sxdpp"
Apr 22 23:50:58.525097 kubelet[2933]: E0422 23:50:58.516350 2933 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 22 23:50:58.525097 kubelet[2933]: I0422 23:50:58.466311 2933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79526fa3-4fea-45ab-95ea-4889982ff475-lib-modules\") pod \"kube-proxy-sxdpp\" (UID: \"79526fa3-4fea-45ab-95ea-4889982ff475\") " pod="kube-system/kube-proxy-sxdpp"
Apr 22 23:50:58.525097 kubelet[2933]: I0422 23:50:58.520794 2933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 22 23:50:58.525097 kubelet[2933]: I0422 23:50:58.520957 2933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost"
Apr 22 23:50:58.543188 kubelet[2933]: I0422 23:50:58.529374 2933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/842680e0fe04fac74a262ce349051ab2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"842680e0fe04fac74a262ce349051ab2\") " pod="kube-system/kube-apiserver-localhost"
Apr 22 23:50:58.543188 kubelet[2933]: I0422 23:50:58.529704 2933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4dz9\" (UniqueName: \"kubernetes.io/projected/79526fa3-4fea-45ab-95ea-4889982ff475-kube-api-access-r4dz9\") pod \"kube-proxy-sxdpp\" (UID: \"79526fa3-4fea-45ab-95ea-4889982ff475\") " pod="kube-system/kube-proxy-sxdpp"
Apr 22 23:50:58.543188 kubelet[2933]: I0422 23:50:58.529724 2933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 22 23:50:58.543188 kubelet[2933]: I0422 23:50:58.529739 2933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 22 23:50:58.543188 kubelet[2933]: I0422 23:50:58.529751 2933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 22 23:50:58.543327 kubelet[2933]: I0422 23:50:58.529762 2933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 22 23:50:58.543327 kubelet[2933]: I0422 23:50:58.529803 2933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/842680e0fe04fac74a262ce349051ab2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"842680e0fe04fac74a262ce349051ab2\") " pod="kube-system/kube-apiserver-localhost"
Apr 22 23:50:58.543327 kubelet[2933]: I0422 23:50:58.529838 2933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/842680e0fe04fac74a262ce349051ab2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"842680e0fe04fac74a262ce349051ab2\") " pod="kube-system/kube-apiserver-localhost"
Apr 22 23:50:59.358093 kubelet[2933]: E0422 23:50:59.348347 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:50:59.500307 kubelet[2933]: E0422 23:50:59.478269 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:50:59.547610 systemd[1]: Created slice kubepods-besteffort-pod79526fa3_4fea_45ab_95ea_4889982ff475.slice - libcontainer container kubepods-besteffort-pod79526fa3_4fea_45ab_95ea_4889982ff475.slice.
Apr 22 23:50:59.585353 kubelet[2933]: E0422 23:50:59.550038 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:51:01.650134 systemd[1726]: Created slice background.slice - User Background Tasks Slice.
Apr 22 23:51:01.989257 kubelet[2933]: E0422 23:51:01.686891 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:51:01.989257 kubelet[2933]: E0422 23:51:01.714606 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.13s"
Apr 22 23:51:02.054336 systemd[1726]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories...
Apr 22 23:51:03.108775 kubelet[2933]: E0422 23:51:03.106278 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:51:03.389237 containerd[1642]: time="2026-04-22T23:51:03.385217075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sxdpp,Uid:79526fa3-4fea-45ab-95ea-4889982ff475,Namespace:kube-system,Attempt:0,}"
Apr 22 23:51:03.442160 kubelet[2933]: E0422 23:51:03.440080 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:51:04.454721 kubelet[2933]: E0422 23:51:04.438367 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.723s"
Apr 22 23:51:04.464498 systemd[1726]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories.
Apr 22 23:51:05.439907 kubelet[2933]: E0422 23:51:05.434165 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:51:05.908103 kubelet[2933]: E0422 23:51:05.825343 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:51:08.461081 kubelet[2933]: E0422 23:51:08.460925 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.416s"
Apr 22 23:51:08.470355 kubelet[2933]: E0422 23:51:08.461255 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:51:10.345090 kubelet[2933]: E0422 23:51:10.344987 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.881s"
Apr 22 23:51:10.876511 kubelet[2933]: E0422 23:51:10.876314 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:51:11.479133 kubelet[2933]: E0422 23:51:11.478999 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:51:13.253673 kubelet[2933]: E0422 23:51:13.232301 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.69s"
Apr 22 23:51:13.570295 containerd[1642]: time="2026-04-22T23:51:13.037384208Z" level=info msg="connecting to shim c81d57d712cf7f49b50ce181dd0e2ae2662d697e3b0cfa37b63732a694beb067" address="unix:///run/containerd/s/a5652392bd484c2ec16626a3c605dc947d366cd6e59ce8d00b2ee23d3ac5ced3" namespace=k8s.io protocol=ttrpc version=3
Apr 22 23:51:13.733045 kubelet[2933]: E0422 23:51:13.729252 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:51:17.288031 kubelet[2933]: E0422 23:51:17.282877 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:51:24.623199 kubelet[2933]: E0422 23:51:24.620165 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.278s"
Apr 22 23:51:25.295560 systemd[1]: Started cri-containerd-c81d57d712cf7f49b50ce181dd0e2ae2662d697e3b0cfa37b63732a694beb067.scope - libcontainer container c81d57d712cf7f49b50ce181dd0e2ae2662d697e3b0cfa37b63732a694beb067.
Apr 22 23:51:35.970389 containerd[1642]: time="2026-04-22T23:51:35.963127473Z" level=error msg="get state for c81d57d712cf7f49b50ce181dd0e2ae2662d697e3b0cfa37b63732a694beb067" error="context deadline exceeded"
Apr 22 23:51:36.197350 containerd[1642]: time="2026-04-22T23:51:35.974356146Z" level=warning msg="unknown status" status=0
Apr 22 23:51:40.762031 containerd[1642]: time="2026-04-22T23:51:40.753393671Z" level=error msg="get state for c81d57d712cf7f49b50ce181dd0e2ae2662d697e3b0cfa37b63732a694beb067" error="context deadline exceeded"
Apr 22 23:51:40.885105 containerd[1642]: time="2026-04-22T23:51:40.809079281Z" level=warning msg="unknown status" status=0
Apr 22 23:51:41.835054 kubelet[2933]: E0422 23:51:41.456303 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="16.51s"
Apr 22 23:51:45.645206 containerd[1642]: time="2026-04-22T23:51:45.350390687Z" level=error msg="get state for c81d57d712cf7f49b50ce181dd0e2ae2662d697e3b0cfa37b63732a694beb067" error="context deadline exceeded"
Apr 22 23:51:45.692787 containerd[1642]: time="2026-04-22T23:51:45.692657130Z" level=warning msg="unknown status" status=0
Apr 22 23:51:48.187251 kubelet[2933]: E0422 23:51:48.098694 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.855s"
Apr 22 23:51:49.389324 containerd[1642]: time="2026-04-22T23:51:49.384136091Z" level=error msg="get state for c81d57d712cf7f49b50ce181dd0e2ae2662d697e3b0cfa37b63732a694beb067" error="context deadline exceeded"
Apr 22 23:51:49.493146 containerd[1642]: time="2026-04-22T23:51:49.403385685Z" level=warning msg="unknown status" status=0
Apr 22 23:51:49.798790 containerd[1642]: time="2026-04-22T23:51:49.777361893Z" level=error msg="ttrpc: received message on inactive stream" stream=5
Apr 22 23:51:49.828510 containerd[1642]: time="2026-04-22T23:51:49.827380447Z" level=error msg="ttrpc: received message on inactive stream" stream=7
Apr 22 23:51:49.828510 containerd[1642]: time="2026-04-22T23:51:49.828021513Z" level=error msg="ttrpc: received message on inactive stream" stream=9
Apr 22 23:51:49.828510 containerd[1642]: time="2026-04-22T23:51:49.828111868Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 22 23:51:50.134048 kubelet[2933]: E0422 23:51:50.102990 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.854s"
Apr 22 23:51:51.260113 containerd[1642]: time="2026-04-22T23:51:51.258176175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sxdpp,Uid:79526fa3-4fea-45ab-95ea-4889982ff475,Namespace:kube-system,Attempt:0,} returns sandbox id \"c81d57d712cf7f49b50ce181dd0e2ae2662d697e3b0cfa37b63732a694beb067\""
Apr 22 23:51:51.679389 kubelet[2933]: E0422 23:51:51.668398 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:51:54.648117 kubelet[2933]: E0422 23:51:54.641030 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.249s"
Apr 22 23:51:54.705057 containerd[1642]: time="2026-04-22T23:51:54.704895108Z" level=info msg="CreateContainer within sandbox \"c81d57d712cf7f49b50ce181dd0e2ae2662d697e3b0cfa37b63732a694beb067\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 22 23:51:57.165303 kubelet[2933]: E0422 23:51:56.987533 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.527s"
Apr 22 23:51:58.844722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount140836844.mount: Deactivated successfully.
Apr 22 23:51:59.553835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1673264258.mount: Deactivated successfully.
Apr 22 23:51:59.640374 containerd[1642]: time="2026-04-22T23:51:59.636118709Z" level=info msg="Container 920f17e9c3771e87b0295f5cc11b44791ff14cfeee79eed818e5f4d1c2c10817: CDI devices from CRI Config.CDIDevices: []"
Apr 22 23:52:01.137340 kubelet[2933]: E0422 23:52:01.130885 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.666s"
Apr 22 23:52:02.691111 containerd[1642]: time="2026-04-22T23:52:02.677330243Z" level=info msg="CreateContainer within sandbox \"c81d57d712cf7f49b50ce181dd0e2ae2662d697e3b0cfa37b63732a694beb067\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"920f17e9c3771e87b0295f5cc11b44791ff14cfeee79eed818e5f4d1c2c10817\""
Apr 22 23:52:03.373783 containerd[1642]: time="2026-04-22T23:52:03.365270451Z" level=info msg="StartContainer for \"920f17e9c3771e87b0295f5cc11b44791ff14cfeee79eed818e5f4d1c2c10817\""
Apr 22 23:52:03.389565 kubelet[2933]: E0422 23:52:03.387319 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.256s"
Apr 22 23:52:05.407900 kubelet[2933]: E0422 23:52:05.407705 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.974s"
Apr 22 23:52:05.445072 containerd[1642]: time="2026-04-22T23:52:05.378891709Z" level=info msg="connecting to shim 920f17e9c3771e87b0295f5cc11b44791ff14cfeee79eed818e5f4d1c2c10817" address="unix:///run/containerd/s/a5652392bd484c2ec16626a3c605dc947d366cd6e59ce8d00b2ee23d3ac5ced3" protocol=ttrpc version=3
Apr 22 23:52:06.822706 kubelet[2933]: E0422 23:52:06.822335 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.414s"
Apr 22 23:52:10.226391 systemd[1]: Started cri-containerd-920f17e9c3771e87b0295f5cc11b44791ff14cfeee79eed818e5f4d1c2c10817.scope - libcontainer container 920f17e9c3771e87b0295f5cc11b44791ff14cfeee79eed818e5f4d1c2c10817.
Apr 22 23:52:18.761267 kubelet[2933]: E0422 23:52:18.678072 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.084s"
Apr 22 23:52:19.325282 sudo[1832]: pam_unix(sudo:session): session closed for user root
Apr 22 23:52:19.610237 sshd[1831]: Connection closed by 10.0.0.1 port 47730
Apr 22 23:52:19.642657 sshd-session[1826]: pam_unix(sshd:session): session closed for user core
Apr 22 23:52:19.898876 containerd[1642]: time="2026-04-22T23:52:19.815182468Z" level=error msg="get state for 920f17e9c3771e87b0295f5cc11b44791ff14cfeee79eed818e5f4d1c2c10817" error="context deadline exceeded"
Apr 22 23:52:19.898876 containerd[1642]: time="2026-04-22T23:52:19.815282333Z" level=warning msg="unknown status" status=0
Apr 22 23:52:20.446194 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:47730.service: Deactivated successfully.
Apr 22 23:52:20.662849 kubelet[2933]: E0422 23:52:20.652813 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.879s"
Apr 22 23:52:20.802558 systemd[1]: session-8.scope: Deactivated successfully.
Apr 22 23:52:20.845179 systemd[1]: session-8.scope: Consumed 1min 8.846s CPU time, 230.1M memory peak.
Apr 22 23:52:21.222298 systemd-logind[1615]: Session 8 logged out. Waiting for processes to exit.
Apr 22 23:52:21.548081 systemd-logind[1615]: Removed session 8.
Apr 22 23:52:22.409754 kubelet[2933]: E0422 23:52:22.407931 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.747s"
Apr 22 23:52:22.908541 kubelet[2933]: I0422 23:52:22.908197 2933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/cafe337f-d6a9-4ed4-8582-b10d21e57fb6-cni-plugin\") pod \"kube-flannel-ds-wttf5\" (UID: \"cafe337f-d6a9-4ed4-8582-b10d21e57fb6\") " pod="kube-flannel/kube-flannel-ds-wttf5"
Apr 22 23:52:22.960237 kubelet[2933]: I0422 23:52:22.942996 2933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cafe337f-d6a9-4ed4-8582-b10d21e57fb6-xtables-lock\") pod \"kube-flannel-ds-wttf5\" (UID: \"cafe337f-d6a9-4ed4-8582-b10d21e57fb6\") " pod="kube-flannel/kube-flannel-ds-wttf5"
Apr 22 23:52:22.960237 kubelet[2933]: I0422 23:52:22.943095 2933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbpgt\" (UniqueName: \"kubernetes.io/projected/cafe337f-d6a9-4ed4-8582-b10d21e57fb6-kube-api-access-fbpgt\") pod \"kube-flannel-ds-wttf5\" (UID: \"cafe337f-d6a9-4ed4-8582-b10d21e57fb6\") " pod="kube-flannel/kube-flannel-ds-wttf5"
Apr 22 23:52:22.960237 kubelet[2933]: I0422 23:52:22.943169 2933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cafe337f-d6a9-4ed4-8582-b10d21e57fb6-run\") pod \"kube-flannel-ds-wttf5\" (UID: \"cafe337f-d6a9-4ed4-8582-b10d21e57fb6\") " pod="kube-flannel/kube-flannel-ds-wttf5"
Apr 22 23:52:22.960237 kubelet[2933]: I0422 23:52:22.943183 2933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/cafe337f-d6a9-4ed4-8582-b10d21e57fb6-cni\") pod \"kube-flannel-ds-wttf5\" (UID: \"cafe337f-d6a9-4ed4-8582-b10d21e57fb6\") " pod="kube-flannel/kube-flannel-ds-wttf5"
Apr 22 23:52:23.028125 kubelet[2933]: I0422 23:52:22.979069 2933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/cafe337f-d6a9-4ed4-8582-b10d21e57fb6-flannel-cfg\") pod \"kube-flannel-ds-wttf5\" (UID: \"cafe337f-d6a9-4ed4-8582-b10d21e57fb6\") " pod="kube-flannel/kube-flannel-ds-wttf5"
Apr 22 23:52:23.275264 containerd[1642]: time="2026-04-22T23:52:23.262774880Z" level=error msg="get state for 920f17e9c3771e87b0295f5cc11b44791ff14cfeee79eed818e5f4d1c2c10817" error="context deadline exceeded"
Apr 22 23:52:23.275264 containerd[1642]: time="2026-04-22T23:52:23.263160006Z" level=warning msg="unknown status" status=0
Apr 22 23:52:23.867190 kubelet[2933]: E0422 23:52:23.780387 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.235s"
Apr 22 23:52:24.092784 systemd[1]: Created slice kubepods-burstable-podcafe337f_d6a9_4ed4_8582_b10d21e57fb6.slice - libcontainer container kubepods-burstable-podcafe337f_d6a9_4ed4_8582_b10d21e57fb6.slice.
Apr 22 23:52:24.927542 kubelet[2933]: E0422 23:52:24.927104 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.043s"
Apr 22 23:52:25.154307 kubelet[2933]: E0422 23:52:25.154058 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:52:25.190072 containerd[1642]: time="2026-04-22T23:52:25.188862497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-wttf5,Uid:cafe337f-d6a9-4ed4-8582-b10d21e57fb6,Namespace:kube-flannel,Attempt:0,}"
Apr 22 23:52:25.902671 containerd[1642]: time="2026-04-22T23:52:25.873197859Z" level=error msg="get state for 920f17e9c3771e87b0295f5cc11b44791ff14cfeee79eed818e5f4d1c2c10817" error="context deadline exceeded"
Apr 22 23:52:25.909220 containerd[1642]: time="2026-04-22T23:52:25.903044710Z" level=warning msg="unknown status" status=0
Apr 22 23:52:27.050377 containerd[1642]: time="2026-04-22T23:52:27.030827320Z" level=error msg="ttrpc: received message on inactive stream" stream=5
Apr 22 23:52:27.050377 containerd[1642]: time="2026-04-22T23:52:27.031532395Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 22 23:52:27.050377 containerd[1642]: time="2026-04-22T23:52:27.031555198Z" level=error msg="ttrpc: received message on inactive stream" stream=7
Apr 22 23:52:27.220216 kubelet[2933]: E0422 23:52:27.220126 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.829s"
Apr 22 23:52:28.441006 kubelet[2933]: E0422 23:52:28.440080 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.031s"
Apr 22 23:52:47.684369 containerd[1642]: time="2026-04-22T23:52:46.954105551Z" level=info msg="connecting to shim 14110e952a6558f28bbb26fcf748cdebac38c6a24ba5d2a4c183508330e167e2" address="unix:///run/containerd/s/a35e04226de6ecfa8cf22490a46d8d2b80aaf1006f85bc829722ce185d694334" namespace=k8s.io protocol=ttrpc version=3
Apr 22 23:52:48.247884 containerd[1642]: time="2026-04-22T23:52:48.243053945Z" level=info msg="StartContainer for \"920f17e9c3771e87b0295f5cc11b44791ff14cfeee79eed818e5f4d1c2c10817\" returns successfully"
Apr 22 23:52:51.093041 kubelet[2933]: E0422 23:52:51.084226 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="21.5s"
Apr 22 23:52:54.843566 kubelet[2933]: E0422 23:52:54.833367 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:52:55.317889 kubelet[2933]: E0422 23:52:55.316965 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:52:56.063822 kubelet[2933]: E0422 23:52:56.063155 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:52:57.918905 kubelet[2933]: E0422 23:52:57.915832 2933 kubelet_node_status.go:460] "Node not becoming ready in time after startup"
Apr 22 23:53:07.719735 kubelet[2933]: E0422 23:53:07.719629 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:53:08.357269 kubelet[2933]: E0422 23:53:08.335869 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="16.294s"
Apr 22 23:53:14.818303 systemd[1]: Started cri-containerd-14110e952a6558f28bbb26fcf748cdebac38c6a24ba5d2a4c183508330e167e2.scope - libcontainer container 14110e952a6558f28bbb26fcf748cdebac38c6a24ba5d2a4c183508330e167e2.
Apr 22 23:53:34.508355 kubelet[2933]: E0422 23:53:34.492310 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:53:37.215386 kubelet[2933]: E0422 23:53:37.215096 2933 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 22 23:53:40.347024 containerd[1642]: time="2026-04-22T23:53:40.145870949Z" level=info msg="container event discarded" container=3a52df8745692d6d701aeca319c0f89cb4c1c1d062a7028de80f7b311649ba0a type=CONTAINER_CREATED_EVENT
Apr 22 23:53:41.570025 containerd[1642]: time="2026-04-22T23:53:40.996396686Z" level=info msg="container event discarded" container=3a52df8745692d6d701aeca319c0f89cb4c1c1d062a7028de80f7b311649ba0a type=CONTAINER_STARTED_EVENT
Apr 22 23:53:43.057209 containerd[1642]: time="2026-04-22T23:53:43.050818140Z" level=info msg="container event discarded" container=9c9db74f9f562891448db2db4e40d4f185e32e190ba41d407668b34b512632bd type=CONTAINER_CREATED_EVENT
Apr 22 23:53:43.206151 containerd[1642]: time="2026-04-22T23:53:43.152050573Z" level=info msg="container event discarded" container=9c9db74f9f562891448db2db4e40d4f185e32e190ba41d407668b34b512632bd type=CONTAINER_STARTED_EVENT
Apr 22 23:53:43.544164 containerd[1642]: time="2026-04-22T23:53:43.525891238Z" level=info msg="container event discarded" container=82ed08deedb73a467b813a11abe8e519e015197241a12f6df5dd08870617b0a8 type=CONTAINER_CREATED_EVENT
Apr 22 23:53:44.049980 containerd[1642]: time="2026-04-22T23:53:43.993082265Z" level=info msg="container event discarded" container=82ed08deedb73a467b813a11abe8e519e015197241a12f6df5dd08870617b0a8 type=CONTAINER_STARTED_EVENT
Apr 22 23:53:44.296363 containerd[1642]: time="2026-04-22T23:53:44.263578397Z" level=info msg="container event discarded" container=f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65 type=CONTAINER_CREATED_EVENT
Apr 22 23:53:44.583394 kubelet[2933]: E0422 23:53:44.577925 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:53:44.880811 containerd[1642]: time="2026-04-22T23:53:44.528817533Z" level=info msg="container event discarded" container=f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1 type=CONTAINER_CREATED_EVENT
Apr 22 23:53:45.419124 containerd[1642]: time="2026-04-22T23:53:44.799757817Z" level=info msg="container event discarded" container=0cfe4f43d53ea2bcf2a2f3f6612f7c458a8ac147a47657d1e60f159900b07bef type=CONTAINER_CREATED_EVENT
Apr 22 23:53:47.471717 containerd[1642]: time="2026-04-22T23:53:47.433242206Z" level=info msg="container event discarded" container=0cfe4f43d53ea2bcf2a2f3f6612f7c458a8ac147a47657d1e60f159900b07bef type=CONTAINER_STARTED_EVENT
Apr 22 23:53:47.780790 containerd[1642]: time="2026-04-22T23:53:47.667215353Z" level=info msg="container event discarded" container=f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65 type=CONTAINER_STARTED_EVENT
Apr 22 23:53:48.139203 containerd[1642]: time="2026-04-22T23:53:48.037388407Z" level=info msg="container event discarded" container=f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1 type=CONTAINER_STARTED_EVENT
Apr 22 23:53:49.964555 containerd[1642]: time="2026-04-22T23:53:49.918226062Z" level=error msg="get state for 14110e952a6558f28bbb26fcf748cdebac38c6a24ba5d2a4c183508330e167e2" error="context deadline exceeded"
Apr 22 23:53:50.379133 containerd[1642]: time="2026-04-22T23:53:50.006306397Z" level=warning msg="unknown status" status=0
Apr 22 23:53:51.133346 kubelet[2933]: E0422 23:53:51.132204 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:53:58.716303 containerd[1642]: time="2026-04-22T23:53:58.708013973Z" level=error msg="get state for 14110e952a6558f28bbb26fcf748cdebac38c6a24ba5d2a4c183508330e167e2" error="context deadline exceeded"
Apr 22 23:53:59.159305 containerd[1642]: time="2026-04-22T23:53:58.782241210Z" level=warning msg="unknown status" status=0
Apr 22 23:54:04.694250 kubelet[2933]: I0422 23:54:04.692041 2933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sxdpp" podStartSLOduration=198.691746834 podStartE2EDuration="3m18.691746834s" podCreationTimestamp="2026-04-22 23:50:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-22 23:54:01.024113446 +0000 UTC m=+199.309459125" watchObservedRunningTime="2026-04-22 23:54:04.691746834 +0000 UTC m=+202.977092480"
Apr 22 23:54:06.950857 kubelet[2933]: E0422 23:54:06.926392 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:54:08.571853 containerd[1642]: time="2026-04-22T23:54:08.290353789Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 22 23:54:09.583930 containerd[1642]: time="2026-04-22T23:54:09.574068165Z" level=error msg="ttrpc: received message on inactive stream" stream=5
Apr 22 23:54:21.986165 kubelet[2933]: E0422 23:54:21.984370 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m13.223s"
Apr 22 23:54:23.895792 kubelet[2933]: E0422 23:54:23.856854 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:54:33.152343 containerd[1642]: time="2026-04-22T23:54:33.137097023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-wttf5,Uid:cafe337f-d6a9-4ed4-8582-b10d21e57fb6,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"14110e952a6558f28bbb26fcf748cdebac38c6a24ba5d2a4c183508330e167e2\""
Apr 22 23:54:34.940282 kubelet[2933]: E0422 23:54:34.939967 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:54:40.814229 kubelet[2933]: E0422 23:54:40.814048 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:54:44.299099 kubelet[2933]: E0422 23:54:44.298770 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:54:48.132481 kubelet[2933]: E0422 23:54:48.131588 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:54:48.976050 containerd[1642]: time="2026-04-22T23:54:48.937739511Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\""
Apr 22 23:54:49.303340 kubelet[2933]: E0422 23:54:49.289374 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="26.278s"
Apr 22 23:54:51.460994 kubelet[2933]: E0422 23:54:51.448271 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:54:52.405905 kubelet[2933]: E0422 23:54:52.404147 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:54:54.568478 kubelet[2933]: E0422 23:54:54.367490 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:54:55.551243 kubelet[2933]: E0422 23:54:55.547059 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:54:58.163162 kubelet[2933]: E0422 23:54:58.161770 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:02.466540 kubelet[2933]: E0422 23:55:02.300970 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:55:02.905153 kubelet[2933]: E0422 23:55:02.901468 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.611s"
Apr 22 23:55:03.959946 kubelet[2933]: E0422 23:55:03.953724 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.038s"
Apr 22 23:55:07.162000 kubelet[2933]: E0422 23:55:07.159224 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.695s"
Apr 22 23:55:08.133496 kubelet[2933]: E0422 23:55:08.132374 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:55:08.841748 kubelet[2933]: E0422 23:55:08.837243 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.436s"
Apr 22 23:55:11.045216 kubelet[2933]: E0422 23:55:11.043850 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.65s"
Apr 22 23:55:13.902098 kubelet[2933]: E0422 23:55:13.900773 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.327s"
Apr 22 23:55:13.958291 kubelet[2933]: E0422 23:55:13.911286 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:55:19.191459 kubelet[2933]: E0422 23:55:19.189685 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:55:24.795563 kubelet[2933]: E0422 23:55:24.792785 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:55:24.898369 kubelet[2933]: E0422 23:55:24.861102 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.444s"
Apr 22 23:55:26.630844 kubelet[2933]: E0422 23:55:26.627984 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.179s"
Apr 22 23:55:29.298059 kubelet[2933]: E0422 23:55:29.286472 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.552s"
Apr 22 23:55:30.953370 kubelet[2933]: E0422 23:55:30.951004 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady
message:Network plugin returns error: cni plugin not initialized" Apr 22 23:55:31.369591 kubelet[2933]: E0422 23:55:31.345267 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.953s" Apr 22 23:55:34.791898 kubelet[2933]: E0422 23:55:34.791741 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.214s" Apr 22 23:55:37.153928 kubelet[2933]: E0422 23:55:37.141209 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:55:42.854078 kubelet[2933]: E0422 23:55:42.845155 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.443s" Apr 22 23:55:43.911395 kubelet[2933]: E0422 23:55:43.909236 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:55:46.550477 kubelet[2933]: E0422 23:55:46.550214 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.585s" Apr 22 23:55:47.648823 kubelet[2933]: E0422 23:55:47.647515 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.094s" Apr 22 23:55:48.985453 kubelet[2933]: E0422 23:55:48.972809 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.169s" Apr 22 23:55:50.139207 kubelet[2933]: E0422 23:55:50.137160 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:55:53.409932 kubelet[2933]: E0422 23:55:53.409627 2933 
kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.975s" Apr 22 23:55:57.141554 kubelet[2933]: E0422 23:55:56.984324 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:55:57.666846 kubelet[2933]: E0422 23:55:57.666729 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.257s" Apr 22 23:55:57.838319 kubelet[2933]: E0422 23:55:57.838248 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:55:58.568270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1731307578.mount: Deactivated successfully. Apr 22 23:56:00.739074 kubelet[2933]: E0422 23:56:00.665975 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.975s" Apr 22 23:56:02.280147 kubelet[2933]: E0422 23:56:02.279794 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.404s" Apr 22 23:56:02.600850 kubelet[2933]: E0422 23:56:02.581230 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:03.628467 kubelet[2933]: E0422 23:56:03.626285 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.251s" Apr 22 23:56:04.951672 kubelet[2933]: E0422 23:56:04.910959 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.28s" Apr 22 23:56:07.259433 kubelet[2933]: E0422 23:56:07.255385 2933 kubelet.go:2627] 
"Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.864s" Apr 22 23:56:08.104932 containerd[1642]: time="2026-04-22T23:56:08.006320796Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:56:08.734315 kubelet[2933]: E0422 23:56:08.728066 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:08.902290 containerd[1642]: time="2026-04-22T23:56:08.788048049Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4854751" Apr 22 23:56:09.905311 containerd[1642]: time="2026-04-22T23:56:09.900702873Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:56:12.868844 kubelet[2933]: E0422 23:56:12.867088 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.358s" Apr 22 23:56:15.032557 kubelet[2933]: E0422 23:56:15.031354 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:15.385047 kubelet[2933]: E0422 23:56:15.384812 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:56:15.387233 containerd[1642]: time="2026-04-22T23:56:15.385178335Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 22 23:56:17.596866 containerd[1642]: time="2026-04-22T23:56:17.581789222Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 1m28.572597277s" Apr 22 23:56:17.646795 containerd[1642]: time="2026-04-22T23:56:17.646571522Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Apr 22 23:56:18.080530 kubelet[2933]: E0422 23:56:18.079135 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.023s" Apr 22 23:56:20.129654 kubelet[2933]: E0422 23:56:20.128506 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.043s" Apr 22 23:56:21.628280 kubelet[2933]: E0422 23:56:21.628192 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:56:22.274250 containerd[1642]: time="2026-04-22T23:56:22.273912642Z" level=info msg="CreateContainer within sandbox \"14110e952a6558f28bbb26fcf748cdebac38c6a24ba5d2a4c183508330e167e2\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Apr 22 23:56:22.425930 kubelet[2933]: E0422 23:56:22.292511 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:22.809964 kubelet[2933]: E0422 23:56:22.808935 2933 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:56:23.743638 kubelet[2933]: E0422 23:56:23.688929 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.459s" Apr 22 23:56:25.472915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1795404563.mount: Deactivated successfully. Apr 22 23:56:25.602138 containerd[1642]: time="2026-04-22T23:56:25.578274067Z" level=info msg="Container 95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95: CDI devices from CRI Config.CDIDevices: []" Apr 22 23:56:26.537301 kubelet[2933]: E0422 23:56:26.537039 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.79s" Apr 22 23:56:28.750217 kubelet[2933]: E0422 23:56:28.748702 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:29.584977 containerd[1642]: time="2026-04-22T23:56:29.584389310Z" level=info msg="CreateContainer within sandbox \"14110e952a6558f28bbb26fcf748cdebac38c6a24ba5d2a4c183508330e167e2\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95\"" Apr 22 23:56:29.651111 kubelet[2933]: E0422 23:56:29.598399 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.106s" Apr 22 23:56:30.727063 containerd[1642]: time="2026-04-22T23:56:30.718787423Z" level=info msg="StartContainer for \"95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95\"" Apr 22 23:56:31.951928 containerd[1642]: time="2026-04-22T23:56:31.949158586Z" level=info msg="connecting to shim 95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95" 
address="unix:///run/containerd/s/a35e04226de6ecfa8cf22490a46d8d2b80aaf1006f85bc829722ce185d694334" protocol=ttrpc version=3 Apr 22 23:56:34.932789 kubelet[2933]: E0422 23:56:34.908397 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:37.510189 systemd[1]: Started cri-containerd-95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95.scope - libcontainer container 95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95. Apr 22 23:56:38.198772 kubelet[2933]: E0422 23:56:38.198581 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.564s" Apr 22 23:56:41.054870 kubelet[2933]: E0422 23:56:40.877909 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:42.079693 kubelet[2933]: E0422 23:56:42.076248 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.844s" Apr 22 23:56:42.357158 containerd[1642]: time="2026-04-22T23:56:42.345222816Z" level=error msg="get state for 95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95" error="context deadline exceeded" Apr 22 23:56:42.440929 containerd[1642]: time="2026-04-22T23:56:42.389139101Z" level=warning msg="unknown status" status=0 Apr 22 23:56:43.661108 kubelet[2933]: E0422 23:56:43.659979 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.503s" Apr 22 23:56:45.041515 containerd[1642]: time="2026-04-22T23:56:45.032331166Z" level=error msg="get state for 95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95" error="context deadline exceeded" Apr 22 23:56:45.041515 containerd[1642]: 
time="2026-04-22T23:56:45.035745740Z" level=warning msg="unknown status" status=0 Apr 22 23:56:45.924067 containerd[1642]: time="2026-04-22T23:56:45.918510845Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 22 23:56:45.958139 containerd[1642]: time="2026-04-22T23:56:45.933191955Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 22 23:56:46.888640 kubelet[2933]: E0422 23:56:46.888369 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.215s" Apr 22 23:56:47.531149 kubelet[2933]: E0422 23:56:47.529088 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:49.364990 systemd[1]: cri-containerd-95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95.scope: Deactivated successfully. Apr 22 23:56:49.561529 systemd[1]: cri-containerd-95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95.scope: Consumed 2.104s CPU time, 5.3M memory peak. 
Apr 22 23:56:49.979610 containerd[1642]: time="2026-04-22T23:56:49.964656715Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcafe337f_d6a9_4ed4_8582_b10d21e57fb6.slice/cri-containerd-95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95.scope/memory.events\": no such file or directory" Apr 22 23:56:50.372127 kubelet[2933]: E0422 23:56:50.350388 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.343s" Apr 22 23:56:51.302230 containerd[1642]: time="2026-04-22T23:56:51.297137846Z" level=info msg="received container exit event container_id:\"95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95\" id:\"95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95\" pid:3336 exited_at:{seconds:1776902210 nanos:346040225}" Apr 22 23:56:51.422208 containerd[1642]: time="2026-04-22T23:56:51.302807496Z" level=info msg="container event discarded" container=c81d57d712cf7f49b50ce181dd0e2ae2662d697e3b0cfa37b63732a694beb067 type=CONTAINER_CREATED_EVENT Apr 22 23:56:51.422208 containerd[1642]: time="2026-04-22T23:56:51.383088275Z" level=info msg="container event discarded" container=c81d57d712cf7f49b50ce181dd0e2ae2662d697e3b0cfa37b63732a694beb067 type=CONTAINER_STARTED_EVENT Apr 22 23:56:51.422208 containerd[1642]: time="2026-04-22T23:56:51.384646249Z" level=info msg="StartContainer for \"95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95\" returns successfully" Apr 22 23:56:55.449578 kubelet[2933]: E0422 23:56:55.442657 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:56:59.303270 kubelet[2933]: E0422 23:56:59.297396 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping 
took too long" expected="1s" actual="8.839s" Apr 22 23:57:01.343914 containerd[1642]: time="2026-04-22T23:57:01.343666581Z" level=error msg="failed to handle container TaskExit event container_id:\"95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95\" id:\"95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95\" pid:3336 exited_at:{seconds:1776902210 nanos:346040225}" error="failed to stop container: context deadline exceeded" Apr 22 23:57:01.567041 containerd[1642]: time="2026-04-22T23:57:01.555264543Z" level=info msg="container event discarded" container=920f17e9c3771e87b0295f5cc11b44791ff14cfeee79eed818e5f4d1c2c10817 type=CONTAINER_CREATED_EVENT Apr 22 23:57:02.033337 kubelet[2933]: E0422 23:57:01.999116 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:57:02.401986 containerd[1642]: time="2026-04-22T23:57:02.366382647Z" level=error msg="ttrpc: received message on inactive stream" stream=27 Apr 22 23:57:02.900264 containerd[1642]: time="2026-04-22T23:57:02.898217354Z" level=info msg="TaskExit event container_id:\"95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95\" id:\"95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95\" pid:3336 exited_at:{seconds:1776902210 nanos:346040225}" Apr 22 23:57:03.060952 kubelet[2933]: E0422 23:57:03.049160 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:57:07.604748 kubelet[2933]: E0422 23:57:07.473948 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.12s" Apr 22 23:57:08.473560 kubelet[2933]: E0422 23:57:08.453130 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:57:10.145288 kubelet[2933]: E0422 23:57:10.143851 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:57:11.764513 kubelet[2933]: E0422 23:57:11.762885 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.722s" Apr 22 23:57:12.952338 containerd[1642]: time="2026-04-22T23:57:12.952166619Z" level=error msg="Failed to handle backOff event container_id:\"95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95\" id:\"95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95\" pid:3336 exited_at:{seconds:1776902210 nanos:346040225} for 95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 22 23:57:13.557372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95-rootfs.mount: Deactivated successfully. 
Apr 22 23:57:13.682896 containerd[1642]: time="2026-04-22T23:57:13.538364422Z" level=error msg="ttrpc: received message on inactive stream" stream=47 Apr 22 23:57:14.881886 kubelet[2933]: E0422 23:57:14.879500 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:57:15.855783 containerd[1642]: time="2026-04-22T23:57:15.685748221Z" level=info msg="TaskExit event container_id:\"95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95\" id:\"95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95\" pid:3336 exited_at:{seconds:1776902210 nanos:346040225}" Apr 22 23:57:16.532264 kubelet[2933]: E0422 23:57:16.531173 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.552s" Apr 22 23:57:19.151700 kubelet[2933]: E0422 23:57:19.151578 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.618s" Apr 22 23:57:20.650718 kubelet[2933]: E0422 23:57:20.644253 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:57:21.801819 kubelet[2933]: E0422 23:57:21.801644 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.647s" Apr 22 23:57:22.110302 kubelet[2933]: E0422 23:57:22.106161 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:57:22.165058 kubelet[2933]: E0422 23:57:22.163163 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Apr 22 23:57:22.791279 update_engine[1620]: I20260422 23:57:22.785838 1620 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 22 23:57:22.875583 update_engine[1620]: I20260422 23:57:22.791577 1620 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 22 23:57:22.875880 update_engine[1620]: I20260422 23:57:22.875823 1620 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 22 23:57:22.955599 update_engine[1620]: I20260422 23:57:22.941737 1620 omaha_request_params.cc:62] Current group set to beta Apr 22 23:57:23.030083 update_engine[1620]: I20260422 23:57:23.000253 1620 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 22 23:57:23.032857 update_engine[1620]: I20260422 23:57:23.032755 1620 update_attempter.cc:643] Scheduling an action processor start. Apr 22 23:57:23.033320 update_engine[1620]: I20260422 23:57:23.033174 1620 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 22 23:57:23.049908 update_engine[1620]: I20260422 23:57:23.043205 1620 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 22 23:57:23.061855 update_engine[1620]: I20260422 23:57:23.061564 1620 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 22 23:57:23.113893 update_engine[1620]: I20260422 23:57:23.066231 1620 omaha_request_action.cc:272] Request: Apr 22 23:57:23.113893 update_engine[1620]: Apr 22 23:57:23.113893 update_engine[1620]: Apr 22 23:57:23.113893 update_engine[1620]: Apr 22 23:57:23.113893 update_engine[1620]: Apr 22 23:57:23.113893 update_engine[1620]: Apr 22 23:57:23.113893 update_engine[1620]: Apr 22 23:57:23.113893 update_engine[1620]: Apr 22 23:57:23.113893 update_engine[1620]: Apr 22 23:57:23.113893 update_engine[1620]: I20260422 23:57:23.096794 1620 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 22 23:57:23.353273 update_engine[1620]: I20260422 23:57:23.242235 1620 
libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 22 23:57:23.353273 update_engine[1620]: I20260422 23:57:23.337317 1620 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 22 23:57:23.407542 locksmithd[1685]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 22 23:57:23.408561 update_engine[1620]: E20260422 23:57:23.365099 1620 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 22 23:57:23.408561 update_engine[1620]: I20260422 23:57:23.381042 1620 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 22 23:57:24.243734 kubelet[2933]: E0422 23:57:24.195147 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.319s" Apr 22 23:57:24.434565 kubelet[2933]: E0422 23:57:24.432470 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:57:26.225123 kubelet[2933]: E0422 23:57:26.198358 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:57:26.667679 kubelet[2933]: E0422 23:57:26.667385 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.258s" Apr 22 23:57:26.996579 kubelet[2933]: E0422 23:57:26.984221 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:57:28.103088 containerd[1642]: time="2026-04-22T23:57:28.100331656Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Apr 22 23:57:30.962807 kubelet[2933]: E0422 23:57:30.961357 2933 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:57:31.939687 kubelet[2933]: E0422 23:57:31.911723 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:57:32.387003 kubelet[2933]: E0422 23:57:32.386577 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.984s" Apr 22 23:57:33.781085 update_engine[1620]: I20260422 23:57:33.780117 1620 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 22 23:57:33.864515 update_engine[1620]: I20260422 23:57:33.800370 1620 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 22 23:57:33.897962 update_engine[1620]: I20260422 23:57:33.895704 1620 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 22 23:57:33.910373 update_engine[1620]: E20260422 23:57:33.905027 1620 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 22 23:57:33.926850 update_engine[1620]: I20260422 23:57:33.922400 1620 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 22 23:57:35.088884 kubelet[2933]: E0422 23:57:35.068211 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.664s" Apr 22 23:57:37.197474 kubelet[2933]: E0422 23:57:37.196237 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:57:37.477761 kubelet[2933]: E0422 23:57:37.460108 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.919s" Apr 22 23:57:38.871222 kubelet[2933]: E0422 23:57:38.863262 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.364s" Apr 22 23:57:41.697094 containerd[1642]: time="2026-04-22T23:57:41.692838319Z" level=info msg="container event discarded" container=920f17e9c3771e87b0295f5cc11b44791ff14cfeee79eed818e5f4d1c2c10817 type=CONTAINER_STARTED_EVENT Apr 22 23:57:42.703344 kubelet[2933]: E0422 23:57:42.583096 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.181s" Apr 22 23:57:43.783639 update_engine[1620]: I20260422 23:57:43.778360 1620 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 22 23:57:44.332582 update_engine[1620]: I20260422 23:57:43.908334 1620 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 22 23:57:44.332582 update_engine[1620]: I20260422 23:57:43.935339 1620 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 22 23:57:44.332582 update_engine[1620]: E20260422 23:57:44.011806 1620 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 22 23:57:44.332582 update_engine[1620]: I20260422 23:57:44.082379 1620 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 22 23:57:44.438221 kubelet[2933]: E0422 23:57:44.082173 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:57:47.705264 kubelet[2933]: E0422 23:57:47.520727 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.687s" Apr 22 23:57:50.544590 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Apr 22 23:57:51.377281 kubelet[2933]: E0422 23:57:51.377170 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 22 23:57:54.698465 systemd-tmpfiles[3390]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 22 23:57:54.707837 systemd-tmpfiles[3390]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 22 23:57:54.855992 update_engine[1620]: I20260422 23:57:54.794308 1620 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 22 23:57:54.855992 update_engine[1620]: I20260422 23:57:54.824838 1620 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 22 23:57:54.857844 update_engine[1620]: I20260422 23:57:54.857044 1620 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 22 23:57:55.034158 update_engine[1620]: E20260422 23:57:55.013921 1620 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 22 23:57:55.034158 update_engine[1620]: I20260422 23:57:55.014536 1620 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 22 23:57:55.034158 update_engine[1620]: I20260422 23:57:55.014557 1620 omaha_request_action.cc:617] Omaha request response:
Apr 22 23:57:55.034158 update_engine[1620]: E20260422 23:57:55.014817 1620 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 22 23:57:55.034158 update_engine[1620]: I20260422 23:57:55.014986 1620 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 22 23:57:55.034158 update_engine[1620]: I20260422 23:57:55.014999 1620 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 22 23:57:55.034158 update_engine[1620]: I20260422 23:57:55.015004 1620 update_attempter.cc:306] Processing Done.
Apr 22 23:57:55.014358 systemd-tmpfiles[3390]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 22 23:57:55.251610 update_engine[1620]: E20260422 23:57:55.029098 1620 update_attempter.cc:619] Update failed.
Apr 22 23:57:55.251610 update_engine[1620]: I20260422 23:57:55.064855 1620 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 22 23:57:55.251610 update_engine[1620]: I20260422 23:57:55.132125 1620 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 22 23:57:55.251610 update_engine[1620]: I20260422 23:57:55.132688 1620 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 22 23:57:55.251610 update_engine[1620]: I20260422 23:57:55.139147 1620 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 22 23:57:55.251610 update_engine[1620]: I20260422 23:57:55.145696 1620 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 22 23:57:55.251610 update_engine[1620]: I20260422 23:57:55.146209 1620 omaha_request_action.cc:272] Request:
Apr 22 23:57:55.251610 update_engine[1620]:
Apr 22 23:57:55.251610 update_engine[1620]:
Apr 22 23:57:55.251610 update_engine[1620]:
Apr 22 23:57:55.251610 update_engine[1620]:
Apr 22 23:57:55.251610 update_engine[1620]:
Apr 22 23:57:55.251610 update_engine[1620]:
Apr 22 23:57:55.251610 update_engine[1620]: I20260422 23:57:55.146220 1620 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 22 23:57:55.251610 update_engine[1620]: I20260422 23:57:55.197362 1620 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 22 23:57:55.557006 update_engine[1620]: I20260422 23:57:55.354147 1620 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 22 23:57:55.557006 update_engine[1620]: E20260422 23:57:55.355589 1620 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 22 23:57:55.557006 update_engine[1620]: I20260422 23:57:55.355894 1620 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 22 23:57:55.557006 update_engine[1620]: I20260422 23:57:55.355904 1620 omaha_request_action.cc:617] Omaha request response:
Apr 22 23:57:55.557006 update_engine[1620]: I20260422 23:57:55.356254 1620 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 22 23:57:55.557006 update_engine[1620]: I20260422 23:57:55.357362 1620 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 22 23:57:55.557006 update_engine[1620]: I20260422 23:57:55.360139 1620 update_attempter.cc:306] Processing Done.
Apr 22 23:57:55.557006 update_engine[1620]: I20260422 23:57:55.360308 1620 update_attempter.cc:310] Error event sent.
Apr 22 23:57:55.557006 update_engine[1620]: I20260422 23:57:55.364909 1620 update_check_scheduler.cc:74] Next update check in 41m23s
Apr 22 23:57:55.909984 locksmithd[1685]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 22 23:57:55.909984 locksmithd[1685]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 22 23:57:56.071679 systemd-tmpfiles[3390]: ACLs are not supported, ignoring.
Apr 22 23:57:56.112284 systemd-tmpfiles[3390]: ACLs are not supported, ignoring.
Apr 22 23:57:57.951843 systemd-tmpfiles[3390]: Detected autofs mount point /boot during canonicalization of boot.
Apr 22 23:57:57.951866 systemd-tmpfiles[3390]: Skipping /boot
Apr 22 23:58:00.836554 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Apr 22 23:58:00.883812 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
Apr 22 23:58:00.989987 systemd[1]: systemd-tmpfiles-clean.service: Consumed 3.366s CPU time, 4.5M memory peak.
Apr 22 23:58:01.352568 kubelet[2933]: E0422 23:58:01.337689 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:58:04.186751 kubelet[2933]: E0422 23:58:04.184117 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="16.416s"
Apr 22 23:58:07.912300 kubelet[2933]: E0422 23:58:07.904341 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.72s"
Apr 22 23:58:09.277355 kubelet[2933]: E0422 23:58:09.052558 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:58:11.979739 kubelet[2933]: E0422 23:58:11.979522 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.023s"
Apr 22 23:58:13.469109 kubelet[2933]: E0422 23:58:13.466060 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.361s"
Apr 22 23:58:15.178847 kubelet[2933]: E0422 23:58:15.178665 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:58:15.629889 kubelet[2933]: E0422 23:58:15.628964 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.162s"
Apr 22 23:58:19.140654 kubelet[2933]: E0422 23:58:19.139595 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.504s"
Apr 22 23:58:21.319365 kubelet[2933]: E0422 23:58:21.318936 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:58:22.307848 kubelet[2933]: E0422 23:58:22.306664 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.104s"
Apr 22 23:58:24.263715 kubelet[2933]: E0422 23:58:24.263623 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.934s"
Apr 22 23:58:25.792439 kubelet[2933]: E0422 23:58:25.791608 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.503s"
Apr 22 23:58:27.476045 kubelet[2933]: E0422 23:58:27.473838 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:58:31.403545 kubelet[2933]: E0422 23:58:31.391617 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.557s"
Apr 22 23:58:34.348220 kubelet[2933]: E0422 23:58:34.345883 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:58:35.380857 kubelet[2933]: E0422 23:58:35.379949 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.988s"
Apr 22 23:58:38.129767 kubelet[2933]: E0422 23:58:38.090604 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.488s"
Apr 22 23:58:39.102269 kubelet[2933]: E0422 23:58:39.097807 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:58:39.506509 kubelet[2933]: E0422 23:58:39.487051 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:58:40.310648 kubelet[2933]: E0422 23:58:40.308591 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.061s"
Apr 22 23:58:40.487501 kubelet[2933]: E0422 23:58:40.486245 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:58:43.639979 kubelet[2933]: E0422 23:58:43.639710 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.299s"
Apr 22 23:58:47.047034 kubelet[2933]: E0422 23:58:47.040266 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.4s"
Apr 22 23:58:47.063028 kubelet[2933]: E0422 23:58:47.060803 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:58:51.182102 kubelet[2933]: E0422 23:58:51.179226 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.117s"
Apr 22 23:58:53.354871 kubelet[2933]: E0422 23:58:53.353344 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:58:53.739646 kubelet[2933]: E0422 23:58:53.666735 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.168s"
Apr 22 23:58:55.574555 kubelet[2933]: E0422 23:58:55.572191 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.867s"
Apr 22 23:58:57.986306 kubelet[2933]: E0422 23:58:57.986222 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.414s"
Apr 22 23:58:58.213596 kubelet[2933]: E0422 23:58:58.213273 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:58:58.387910 kubelet[2933]: E0422 23:58:58.340377 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:59:00.180332 kubelet[2933]: E0422 23:59:00.180221 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:59:03.076494 kubelet[2933]: E0422 23:59:03.066862 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.982s"
Apr 22 23:59:05.331922 kubelet[2933]: E0422 23:59:05.305036 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.2s"
Apr 22 23:59:06.149937 kubelet[2933]: E0422 23:59:06.149317 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:59:08.194553 kubelet[2933]: E0422 23:59:08.179305 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.849s"
Apr 22 23:59:09.943274 kubelet[2933]: E0422 23:59:09.930665 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.74s"
Apr 22 23:59:12.002477 kubelet[2933]: E0422 23:59:11.983026 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.031s"
Apr 22 23:59:12.277752 kubelet[2933]: E0422 23:59:12.239726 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:59:13.988886 kubelet[2933]: E0422 23:59:13.987956 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.92s"
Apr 22 23:59:16.161079 kubelet[2933]: E0422 23:59:16.159330 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.083s"
Apr 22 23:59:17.302479 kubelet[2933]: E0422 23:59:17.299247 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.124s"
Apr 22 23:59:18.298307 kubelet[2933]: E0422 23:59:18.297276 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:59:20.103856 kubelet[2933]: E0422 23:59:20.101618 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.669s"
Apr 22 23:59:22.364086 kubelet[2933]: E0422 23:59:22.361284 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.072s"
Apr 22 23:59:24.076384 kubelet[2933]: E0422 23:59:24.073627 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:59:25.054631 kubelet[2933]: E0422 23:59:25.053258 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.681s"
Apr 22 23:59:28.956346 kubelet[2933]: E0422 23:59:28.955318 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.895s"
Apr 22 23:59:30.058080 kubelet[2933]: E0422 23:59:30.056148 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:59:33.207274 containerd[1642]: time="2026-04-22T23:59:33.198548460Z" level=info msg="container event discarded" container=14110e952a6558f28bbb26fcf748cdebac38c6a24ba5d2a4c183508330e167e2 type=CONTAINER_CREATED_EVENT
Apr 22 23:59:33.407570 containerd[1642]: time="2026-04-22T23:59:33.250328572Z" level=info msg="container event discarded" container=14110e952a6558f28bbb26fcf748cdebac38c6a24ba5d2a4c183508330e167e2 type=CONTAINER_STARTED_EVENT
Apr 22 23:59:34.550294 kubelet[2933]: E0422 23:59:34.548623 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.593s"
Apr 22 23:59:36.724117 kubelet[2933]: E0422 23:59:36.706627 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:59:37.698078 kubelet[2933]: E0422 23:59:37.680300 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.127s"
Apr 22 23:59:42.377780 kubelet[2933]: E0422 23:59:42.374931 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.678s"
Apr 22 23:59:43.569697 kubelet[2933]: E0422 23:59:43.497134 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:59:47.213039 kubelet[2933]: E0422 23:59:47.208647 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.803s"
Apr 22 23:59:50.269058 kubelet[2933]: E0422 23:59:50.215227 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 22 23:59:55.988878 kubelet[2933]: E0422 23:59:55.988444 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.54s"
Apr 22 23:59:56.300269 kubelet[2933]: E0422 23:59:56.010632 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:00:00.212716 kubelet[2933]: E0423 00:00:00.199378 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.198s"
Apr 23 00:00:01.774536 kubelet[2933]: E0423 00:00:01.774333 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:00:02.395541 kubelet[2933]: E0423 00:00:02.395284 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.187s"
Apr 23 00:00:05.051744 kubelet[2933]: E0423 00:00:05.049180 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.525s"
Apr 23 00:00:05.681339 kubelet[2933]: E0423 00:00:05.681086 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:05.824380 kubelet[2933]: E0423 00:00:05.679781 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:06.687302 kubelet[2933]: E0423 00:00:06.656235 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:07.203185 kubelet[2933]: E0423 00:00:07.147397 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.71s"
Apr 23 00:00:07.851187 kubelet[2933]: E0423 00:00:07.850320 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:00:11.729499 kubelet[2933]: E0423 00:00:11.729282 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.914s"
Apr 23 00:00:15.573273 kubelet[2933]: E0423 00:00:15.572876 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:00:18.512262 kubelet[2933]: E0423 00:00:18.512136 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.633s"
Apr 23 00:00:19.093038 kubelet[2933]: E0423 00:00:18.746651 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:23.141169 kubelet[2933]: E0423 00:00:22.793749 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:00:26.145356 kubelet[2933]: E0423 00:00:26.145125 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.619s"
Apr 23 00:00:30.348541 kubelet[2933]: E0423 00:00:30.341002 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.06s"
Apr 23 00:00:30.724652 kubelet[2933]: E0423 00:00:30.718285 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:00:34.492096 kubelet[2933]: E0423 00:00:34.487621 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.122s"
Apr 23 00:00:37.057058 kubelet[2933]: E0423 00:00:37.053345 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:00:40.637041 kubelet[2933]: E0423 00:00:40.636044 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.131s"
Apr 23 00:00:43.667519 kubelet[2933]: E0423 00:00:43.666889 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:00:47.510718 kubelet[2933]: E0423 00:00:47.510175 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.654s"
Apr 23 00:00:50.826992 kubelet[2933]: E0423 00:00:50.825100 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:00:53.285689 kubelet[2933]: E0423 00:00:53.277345 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.727s"
Apr 23 00:00:54.564400 kubelet[2933]: E0423 00:00:54.556310 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.271s"
Apr 23 00:00:55.943301 kubelet[2933]: E0423 00:00:55.943111 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:01:00.554609 kubelet[2933]: E0423 00:01:00.509936 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.042s"
Apr 23 00:01:02.099212 kubelet[2933]: E0423 00:01:02.095672 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:01:08.537202 kubelet[2933]: E0423 00:01:08.517707 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.955s"
Apr 23 00:01:08.980065 kubelet[2933]: E0423 00:01:08.857662 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:01:15.442645 systemd[1]: Started sshd@7-10.0.0.13:22-10.0.0.1:37446.service - OpenSSH per-connection server daemon (10.0.0.1:37446).
Apr 23 00:01:23.599364 sshd[3433]: Accepted publickey for core from 10.0.0.1 port 37446 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:01:23.912506 kubelet[2933]: E0423 00:01:23.375080 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:01:23.992902 sshd-session[3433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:01:26.543356 systemd-logind[1615]: New session 9 of user core.
Apr 23 00:01:26.769082 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 23 00:01:29.739823 containerd[1642]: time="2026-04-23T00:01:29.478791577Z" level=info msg="container event discarded" container=95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95 type=CONTAINER_CREATED_EVENT
Apr 23 00:01:36.112944 kubelet[2933]: E0423 00:01:36.093234 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:01:38.584551 kubelet[2933]: E0423 00:01:38.569177 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="29.793s"
Apr 23 00:01:47.254142 kubelet[2933]: E0423 00:01:47.251991 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:01:50.668646 containerd[1642]: time="2026-04-23T00:01:50.535311064Z" level=info msg="container event discarded" container=95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95 type=CONTAINER_STARTED_EVENT
Apr 23 00:01:59.810960 kubelet[2933]: E0423 00:01:59.791349 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:02:01.126278 sshd[3437]: Connection closed by 10.0.0.1 port 37446
Apr 23 00:02:01.149128 sshd-session[3433]: pam_unix(sshd:session): session closed for user core
Apr 23 00:02:02.364626 systemd[1]: sshd@7-10.0.0.13:22-10.0.0.1:37446.service: Deactivated successfully.
Apr 23 00:02:02.588355 systemd[1]: sshd@7-10.0.0.13:22-10.0.0.1:37446.service: Consumed 2.373s CPU time, 4M memory peak.
Apr 23 00:02:02.982161 systemd[1]: session-9.scope: Deactivated successfully.
Apr 23 00:02:03.097878 systemd[1]: session-9.scope: Consumed 17.186s CPU time, 15.3M memory peak.
Apr 23 00:02:03.977641 systemd-logind[1615]: Session 9 logged out. Waiting for processes to exit.
Apr 23 00:02:04.800360 systemd-logind[1615]: Removed session 9.
Apr 23 00:02:09.005796 systemd[1]: Started sshd@8-10.0.0.13:22-10.0.0.1:48534.service - OpenSSH per-connection server daemon (10.0.0.1:48534).
Apr 23 00:02:17.666680 kubelet[2933]: E0423 00:02:17.655340 2933 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:01:53Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:01:53Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:01:53Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:01:53Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.13:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 23 00:02:23.072208 sshd[3468]: Accepted publickey for core from 10.0.0.1 port 48534 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:02:24.103371 sshd-session[3468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:02:24.440194 containerd[1642]: time="2026-04-23T00:02:24.405806538Z" level=info msg="container event discarded" container=95b0dd8aa573d582309c9130dbe8fdf97d29217fa6bb1c812496180fdbf50c95 type=CONTAINER_STOPPED_EVENT
Apr 23 00:02:28.367082 kubelet[2933]: E0423 00:02:28.366811 2933 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 23 00:02:28.453400 systemd-logind[1615]: New session 10 of user core.
Apr 23 00:02:28.609569 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 23 00:02:43.427702 kubelet[2933]: E0423 00:02:43.253859 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:02:44.063371 systemd[1]: cri-containerd-f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65.scope: Deactivated successfully.
Apr 23 00:02:44.241739 systemd[1]: cri-containerd-f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65.scope: Consumed 3min 2.492s CPU time, 51.2M memory peak.
Apr 23 00:02:44.424397 systemd[1]: cri-containerd-f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1.scope: Deactivated successfully.
Apr 23 00:02:44.553153 systemd[1]: cri-containerd-f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1.scope: Consumed 51.109s CPU time, 26M memory peak.
Apr 23 00:02:45.352641 kubelet[2933]: E0423 00:02:45.276688 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m6.693s"
Apr 23 00:02:45.544813 containerd[1642]: time="2026-04-23T00:02:45.346052776Z" level=info msg="received container exit event container_id:\"f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65\" id:\"f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65\" pid:2765 exit_status:1 exited_at:{seconds:1776902565 nanos:3326401}"
Apr 23 00:02:45.751715 containerd[1642]: time="2026-04-23T00:02:45.688813525Z" level=info msg="received container exit event container_id:\"f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1\" id:\"f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1\" pid:2756 exit_status:1 exited_at:{seconds:1776902564 nanos:852636396}"
Apr 23 00:02:45.775761 kubelet[2933]: E0423 00:02:45.603042 2933 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 23 00:02:46.964015 kubelet[2933]: E0423 00:02:46.963842 2933 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 23 00:02:47.445157 kubelet[2933]: E0423 00:02:47.444785 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.168s"
Apr 23 00:02:47.597747 sshd[3472]: Connection closed by 10.0.0.1 port 48534
Apr 23 00:02:47.601238 sshd-session[3468]: pam_unix(sshd:session): session closed for user core
Apr 23 00:02:47.662108 kubelet[2933]: E0423 00:02:47.644785 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:02:47.711831 kubelet[2933]: E0423 00:02:47.711263 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:02:47.734869 systemd[1]: sshd@8-10.0.0.13:22-10.0.0.1:48534.service: Deactivated successfully.
Apr 23 00:02:47.736785 systemd[1]: sshd@8-10.0.0.13:22-10.0.0.1:48534.service: Consumed 3.274s CPU time, 3.9M memory peak.
Apr 23 00:02:47.796302 kubelet[2933]: E0423 00:02:47.761549 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:02:47.796302 kubelet[2933]: E0423 00:02:47.761739 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:02:47.949849 systemd[1]: session-10.scope: Deactivated successfully.
Apr 23 00:02:47.962043 systemd[1]: session-10.scope: Consumed 8.480s CPU time, 17M memory peak.
Apr 23 00:02:48.124247 systemd-logind[1615]: Session 10 logged out. Waiting for processes to exit.
Apr 23 00:02:48.348302 systemd-logind[1615]: Removed session 10.
Apr 23 00:02:50.198107 kubelet[2933]: E0423 00:02:50.193334 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:02:52.235861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65-rootfs.mount: Deactivated successfully.
Apr 23 00:02:55.168293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1-rootfs.mount: Deactivated successfully.
Apr 23 00:02:55.792211 systemd[1]: Started sshd@9-10.0.0.13:22-10.0.0.1:44110.service - OpenSSH per-connection server daemon (10.0.0.1:44110).
Apr 23 00:02:56.876753 kubelet[2933]: E0423 00:02:56.875604 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:02:57.078547 containerd[1642]: time="2026-04-23T00:02:57.078043989Z" level=error msg="failed to delete shim" error="close wait error: context deadline exceeded" id=f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1
Apr 23 00:03:02.230865 kubelet[2933]: E0423 00:03:02.038380 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.602s"
Apr 23 00:03:02.946030 kubelet[2933]: E0423 00:03:02.945668 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:03:02.955743 sshd[3520]: Accepted publickey for core from 10.0.0.1 port 44110 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:03:03.168246 sshd-session[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:03:04.165598 kubelet[2933]: E0423 00:03:04.164705 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.788s"
Apr 23 00:03:04.236305 kubelet[2933]: I0423 00:03:04.236268 2933 scope.go:117] "RemoveContainer" containerID="f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1"
Apr 23 00:03:05.371290 systemd-logind[1615]: New session 11 of user core.
Apr 23 00:03:05.568311 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 23 00:03:05.812363 kubelet[2933]: E0423 00:03:05.783023 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:03:08.467772 kubelet[2933]: E0423 00:03:08.443851 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.279s"
Apr 23 00:03:09.564182 kubelet[2933]: E0423 00:03:09.561261 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:03:10.000687 containerd[1642]: time="2026-04-23T00:03:09.997383409Z" level=info msg="CreateContainer within sandbox \"9c9db74f9f562891448db2db4e40d4f185e32e190ba41d407668b34b512632bd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 23 00:03:11.138695 kubelet[2933]: E0423 00:03:11.134350 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.439s"
Apr 23 00:03:12.890630 kubelet[2933]: I0423 00:03:12.890028 2933 scope.go:117] "RemoveContainer" containerID="f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65"
Apr 23 00:03:12.890630 kubelet[2933]: E0423 00:03:12.890183 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:03:16.060884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3304631059.mount: Deactivated successfully.
Apr 23 00:03:19.290716 containerd[1642]: time="2026-04-23T00:03:18.962791113Z" level=info msg="CreateContainer within sandbox \"3a52df8745692d6d701aeca319c0f89cb4c1c1d062a7028de80f7b311649ba0a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 23 00:03:19.801289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount938952206.mount: Deactivated successfully.
Apr 23 00:03:20.052166 containerd[1642]: time="2026-04-23T00:03:19.349829666Z" level=info msg="Container a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386: CDI devices from CRI Config.CDIDevices: []"
Apr 23 00:03:22.784134 kubelet[2933]: E0423 00:03:22.784030 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:03:22.846876 sshd[3526]: Connection closed by 10.0.0.1 port 44110
Apr 23 00:03:22.907061 sshd-session[3520]: pam_unix(sshd:session): session closed for user core
Apr 23 00:03:23.540372 systemd[1]: sshd@9-10.0.0.13:22-10.0.0.1:44110.service: Deactivated successfully.
Apr 23 00:03:23.586641 systemd[1]: sshd@9-10.0.0.13:22-10.0.0.1:44110.service: Consumed 2.443s CPU time, 4M memory peak.
Apr 23 00:03:23.899395 systemd[1]: session-11.scope: Deactivated successfully.
Apr 23 00:03:24.062082 systemd[1]: session-11.scope: Consumed 8.120s CPU time, 17.6M memory peak.
Apr 23 00:03:24.296021 kubelet[2933]: E0423 00:03:24.294733 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.892s"
Apr 23 00:03:24.368211 systemd-logind[1615]: Session 11 logged out. Waiting for processes to exit.
Apr 23 00:03:24.913737 systemd-logind[1615]: Removed session 11.
Apr 23 00:03:26.969355 kubelet[2933]: E0423 00:03:26.964834 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.67s"
Apr 23 00:03:28.446879 kubelet[2933]: E0423 00:03:28.439667 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:03:28.644075 containerd[1642]: time="2026-04-23T00:03:28.643877707Z" level=info msg="CreateContainer within sandbox \"9c9db74f9f562891448db2db4e40d4f185e32e190ba41d407668b34b512632bd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\""
Apr 23 00:03:31.169266 systemd[1]: Started sshd@10-10.0.0.13:22-10.0.0.1:54916.service - OpenSSH per-connection server daemon (10.0.0.1:54916).
Apr 23 00:03:31.902730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3049201649.mount: Deactivated successfully.
Apr 23 00:03:32.582242 containerd[1642]: time="2026-04-23T00:03:32.552877497Z" level=info msg="Container aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a: CDI devices from CRI Config.CDIDevices: []"
Apr 23 00:03:32.720174 containerd[1642]: time="2026-04-23T00:03:32.718913320Z" level=info msg="StartContainer for \"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\""
Apr 23 00:03:38.500262 kubelet[2933]: E0423 00:03:38.488365 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:03:39.759070 containerd[1642]: time="2026-04-23T00:03:39.582168537Z" level=info msg="connecting to shim a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386" address="unix:///run/containerd/s/52c20a04aff33a832cf09fe18f3d8420202bcd603b0b0170557108773320997c" protocol=ttrpc version=3
Apr 23 00:03:42.404299 sshd[3543]: Accepted publickey for core from 10.0.0.1 port 54916 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:03:42.710894 sshd-session[3543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:03:44.489892 systemd-logind[1615]: New session 12 of user core.
Apr 23 00:03:44.848716 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 23 00:03:45.312286 kubelet[2933]: E0423 00:03:45.312026 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:03:54.190045 kubelet[2933]: E0423 00:03:54.189762 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="27.214s"
Apr 23 00:03:55.652789 containerd[1642]: time="2026-04-23T00:03:55.652652771Z" level=info msg="CreateContainer within sandbox \"3a52df8745692d6d701aeca319c0f89cb4c1c1d062a7028de80f7b311649ba0a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a\""
Apr 23 00:03:57.127297 kubelet[2933]: E0423 00:03:57.127187 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:03:57.612293 kubelet[2933]: I0423 00:03:57.577173 2933 scope.go:117] "RemoveContainer" containerID="f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65"
Apr 23 00:04:02.132268 containerd[1642]: time="2026-04-23T00:04:02.113735517Z" level=info msg="StartContainer for \"aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a\""
Apr 23 00:04:02.909908 systemd[1]: Started cri-containerd-a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386.scope - libcontainer container a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386.
Apr 23 00:04:04.819750 sshd[3554]: Connection closed by 10.0.0.1 port 54916
Apr 23 00:04:04.846304 sshd-session[3543]: pam_unix(sshd:session): session closed for user core
Apr 23 00:04:05.918361 systemd[1]: sshd@10-10.0.0.13:22-10.0.0.1:54916.service: Deactivated successfully.
Apr 23 00:04:06.119125 systemd[1]: sshd@10-10.0.0.13:22-10.0.0.1:54916.service: Consumed 3.490s CPU time, 4.2M memory peak.
Apr 23 00:04:06.514263 systemd[1]: session-12.scope: Deactivated successfully.
Apr 23 00:04:06.661308 systemd[1]: session-12.scope: Consumed 8.821s CPU time, 17.5M memory peak.
Apr 23 00:04:07.304123 systemd-logind[1615]: Session 12 logged out. Waiting for processes to exit.
Apr 23 00:04:07.808752 systemd-logind[1615]: Removed session 12.
Apr 23 00:04:09.527244 kubelet[2933]: E0423 00:04:09.512060 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:04:10.212887 systemd[1]: Started sshd@11-10.0.0.13:22-10.0.0.1:45196.service - OpenSSH per-connection server daemon (10.0.0.1:45196).
Apr 23 00:04:10.291241 kubelet[2933]: E0423 00:04:10.281858 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.491s"
Apr 23 00:04:10.532761 containerd[1642]: time="2026-04-23T00:04:10.531930662Z" level=info msg="RemoveContainer for \"f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65\""
Apr 23 00:04:10.592852 containerd[1642]: time="2026-04-23T00:04:10.591362588Z" level=info msg="connecting to shim aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a" address="unix:///run/containerd/s/3dfe3d6323e8f8f75559a13206af03e2868c85f01fbf4abe1aaa822e71aa8528" protocol=ttrpc version=3
Apr 23 00:04:10.615763 kubelet[2933]: E0423 00:04:10.613116 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:04:11.059903 kubelet[2933]: E0423 00:04:11.040673 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:04:13.858694 kubelet[2933]: E0423 00:04:13.858216 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.461s"
Apr 23 00:04:14.755319 containerd[1642]: time="2026-04-23T00:04:14.755053057Z" level=info msg="RemoveContainer for \"f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65\" returns successfully"
Apr 23 00:04:14.812672 containerd[1642]: time="2026-04-23T00:04:14.811340481Z" level=error msg="ContainerStatus for \"f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65\": not found"
Apr 23 00:04:15.413938 kubelet[2933]: E0423 00:04:15.401845 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:04:15.590736 kubelet[2933]: E0423 00:04:15.420373 2933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65\": not found" containerID="f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65"
Apr 23 00:04:15.591811 kubelet[2933]: I0423 00:04:15.591271 2933 scope.go:117] "RemoveContainer" containerID="f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1"
Apr 23 00:04:15.928140 sshd[3589]: Accepted publickey for core from 10.0.0.1 port 45196 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:04:16.391900 sshd-session[3589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:04:18.598947 systemd-logind[1615]: New session 13 of user core.
Apr 23 00:04:18.843786 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 23 00:04:21.455276 systemd[1]: Started cri-containerd-aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a.scope - libcontainer container aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a.
Apr 23 00:04:32.675299 kubelet[2933]: E0423 00:04:32.662384 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:04:42.488255 sshd[3623]: Connection closed by 10.0.0.1 port 45196
Apr 23 00:04:42.512940 sshd-session[3589]: pam_unix(sshd:session): session closed for user core
Apr 23 00:04:43.734711 systemd[1]: sshd@11-10.0.0.13:22-10.0.0.1:45196.service: Deactivated successfully.
Apr 23 00:04:43.878082 systemd[1]: sshd@11-10.0.0.13:22-10.0.0.1:45196.service: Consumed 1.694s CPU time, 4.2M memory peak.
Apr 23 00:04:44.101056 systemd[1]: session-13.scope: Deactivated successfully.
Apr 23 00:04:44.190133 systemd[1]: session-13.scope: Consumed 10.714s CPU time, 17.7M memory peak.
Apr 23 00:04:44.568376 systemd-logind[1615]: Session 13 logged out. Waiting for processes to exit.
Apr 23 00:04:45.792940 systemd-logind[1615]: Removed session 13.
Apr 23 00:04:48.934659 systemd[1]: Started sshd@12-10.0.0.13:22-10.0.0.1:39656.service - OpenSSH per-connection server daemon (10.0.0.1:39656).
Apr 23 00:04:59.082746 sshd[3655]: Accepted publickey for core from 10.0.0.1 port 39656 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:04:59.460140 sshd-session[3655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:04:59.953866 containerd[1642]: time="2026-04-23T00:04:59.245667029Z" level=info msg="RemoveContainer for \"f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1\""
Apr 23 00:05:01.680368 systemd-logind[1615]: New session 14 of user core.
Apr 23 00:05:01.851853 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 23 00:05:02.777850 kubelet[2933]: E0423 00:05:02.775378 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:05:02.804809 kubelet[2933]: E0423 00:05:02.804688 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="48.588s"
Apr 23 00:05:03.846601 containerd[1642]: time="2026-04-23T00:05:03.841378252Z" level=info msg="StartContainer for \"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\" returns successfully"
Apr 23 00:05:18.350687 kubelet[2933]: E0423 00:05:16.439639 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:05:23.262904 containerd[1642]: time="2026-04-23T00:05:23.247627949Z" level=info msg="RemoveContainer for \"f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1\" returns successfully"
Apr 23 00:05:31.035202 sshd[3662]: Connection closed by 10.0.0.1 port 39656
Apr 23 00:05:31.238582 sshd-session[3655]: pam_unix(sshd:session): session closed for user core
Apr 23 00:05:32.212332 containerd[1642]: time="2026-04-23T00:05:32.066197523Z" level=info msg="StartContainer for \"aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a\" returns successfully"
Apr 23 00:05:33.363942 systemd[1]: sshd@12-10.0.0.13:22-10.0.0.1:39656.service: Deactivated successfully.
Apr 23 00:05:33.384401 systemd[1]: sshd@12-10.0.0.13:22-10.0.0.1:39656.service: Consumed 3.208s CPU time, 4M memory peak.
Apr 23 00:05:33.852924 systemd[1]: session-14.scope: Deactivated successfully.
Apr 23 00:05:33.998343 systemd[1]: session-14.scope: Consumed 15.161s CPU time, 19M memory peak.
Apr 23 00:05:35.254265 systemd-logind[1615]: Session 14 logged out. Waiting for processes to exit.
Apr 23 00:05:36.993899 systemd-logind[1615]: Removed session 14.
Apr 23 00:05:37.088719 systemd[1]: Started sshd@13-10.0.0.13:22-10.0.0.1:37916.service - OpenSSH per-connection server daemon (10.0.0.1:37916).
Apr 23 00:05:39.398113 kubelet[2933]: E0423 00:05:39.394278 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:05:45.494252 kubelet[2933]: E0423 00:05:45.491717 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="42.544s"
Apr 23 00:05:48.356259 sshd[3700]: Accepted publickey for core from 10.0.0.1 port 37916 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:05:49.351281 sshd-session[3700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:05:50.253946 kubelet[2933]: E0423 00:05:50.214989 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:05:50.601251 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 23 00:05:50.652401 systemd-logind[1615]: New session 15 of user core.
Apr 23 00:05:53.785171 kubelet[2933]: E0423 00:05:53.635786 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.079s"
Apr 23 00:06:02.187549 kubelet[2933]: E0423 00:06:02.185974 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:06:03.707372 sshd[3708]: Connection closed by 10.0.0.1 port 37916
Apr 23 00:06:03.949368 sshd-session[3700]: pam_unix(sshd:session): session closed for user core
Apr 23 00:06:05.065782 systemd[1]: sshd@13-10.0.0.13:22-10.0.0.1:37916.service: Deactivated successfully.
Apr 23 00:06:05.186133 systemd[1]: sshd@13-10.0.0.13:22-10.0.0.1:37916.service: Consumed 3.763s CPU time, 4.2M memory peak.
Apr 23 00:06:05.264305 kubelet[2933]: E0423 00:06:05.258907 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:05.563686 systemd[1]: session-15.scope: Deactivated successfully.
Apr 23 00:06:05.745881 systemd[1]: session-15.scope: Consumed 6.735s CPU time, 15.8M memory peak.
Apr 23 00:06:06.165811 systemd-logind[1615]: Session 15 logged out. Waiting for processes to exit.
Apr 23 00:06:06.508179 systemd-logind[1615]: Removed session 15.
Apr 23 00:06:07.509959 kubelet[2933]: E0423 00:06:07.509818 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:08.014185 kubelet[2933]: E0423 00:06:07.909354 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:10.332978 systemd[1]: Started sshd@14-10.0.0.13:22-10.0.0.1:46230.service - OpenSSH per-connection server daemon (10.0.0.1:46230).
Apr 23 00:06:15.782780 sshd[3736]: Accepted publickey for core from 10.0.0.1 port 46230 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:06:16.238692 sshd-session[3736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:06:17.581948 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 23 00:06:17.584316 systemd-logind[1615]: New session 16 of user core.
Apr 23 00:06:17.804591 kubelet[2933]: E0423 00:06:17.733207 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:06:28.733858 kubelet[2933]: E0423 00:06:28.733810 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:06:39.112023 sshd[3740]: Connection closed by 10.0.0.1 port 46230
Apr 23 00:06:39.351859 sshd-session[3736]: pam_unix(sshd:session): session closed for user core
Apr 23 00:06:40.754670 systemd[1]: sshd@14-10.0.0.13:22-10.0.0.1:46230.service: Deactivated successfully.
Apr 23 00:06:40.790695 systemd[1]: sshd@14-10.0.0.13:22-10.0.0.1:46230.service: Consumed 1.523s CPU time, 4M memory peak.
Apr 23 00:06:41.158664 systemd[1]: cri-containerd-aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a.scope: Deactivated successfully.
Apr 23 00:06:41.215186 systemd[1]: cri-containerd-aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a.scope: Consumed 31.562s CPU time, 39.3M memory peak.
Apr 23 00:06:41.375957 systemd[1]: session-16.scope: Deactivated successfully.
Apr 23 00:06:41.408123 systemd[1]: session-16.scope: Consumed 9.432s CPU time, 15.8M memory peak.
Apr 23 00:06:41.598588 systemd-logind[1615]: Session 16 logged out. Waiting for processes to exit.
Apr 23 00:06:41.634161 containerd[1642]: time="2026-04-23T00:06:41.633954536Z" level=info msg="received container exit event container_id:\"aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a\" id:\"aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a\" pid:3638 exit_status:1 exited_at:{seconds:1776902801 nanos:5592847}"
Apr 23 00:06:41.660918 systemd[1]: Started systemd-sysupdate.service - Automatic System Update.
Apr 23 00:06:41.751667 systemd-logind[1615]: Removed session 16.
Apr 23 00:06:41.972202 kubelet[2933]: E0423 00:06:41.952959 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:06:42.055179 kubelet[2933]: E0423 00:06:42.030189 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="45.253s"
Apr 23 00:06:42.447188 systemd-sysupdate[3757]: Discovering installed instances…
Apr 23 00:06:42.453204 systemd-sysupdate[3757]: Discovering available instances…
Apr 23 00:06:42.453309 systemd-sysupdate[3757]: Determining installed update sets…
Apr 23 00:06:42.453313 systemd-sysupdate[3757]: Determining available update sets…
Apr 23 00:06:42.453321 systemd-sysupdate[3757]: No update needed.
Apr 23 00:06:42.558772 systemd[1]: systemd-sysupdate.service: Deactivated successfully.
Apr 23 00:06:43.277982 kubelet[2933]: E0423 00:06:43.265541 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:43.744106 kubelet[2933]: E0423 00:06:43.743785 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:44.495739 systemd[1]: Started sshd@15-10.0.0.13:22-10.0.0.1:36966.service - OpenSSH per-connection server daemon (10.0.0.1:36966).
Apr 23 00:06:44.579870 kubelet[2933]: E0423 00:06:44.579781 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.141s"
Apr 23 00:06:44.614971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a-rootfs.mount: Deactivated successfully.
Apr 23 00:06:44.625861 kubelet[2933]: E0423 00:06:44.623681 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:44.789823 kubelet[2933]: E0423 00:06:44.781861 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:45.411288 sshd[3766]: Accepted publickey for core from 10.0.0.1 port 36966 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:06:45.481237 sshd-session[3766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:06:45.763746 systemd-logind[1615]: New session 17 of user core.
Apr 23 00:06:45.883863 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 23 00:06:47.255231 kubelet[2933]: I0423 00:06:47.247175 2933 scope.go:117] "RemoveContainer" containerID="aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a"
Apr 23 00:06:48.045910 kubelet[2933]: E0423 00:06:48.041905 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:48.461841 kubelet[2933]: E0423 00:06:48.460966 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(e9ca41790ae21be9f4cbd451ade0acec)\"" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec"
Apr 23 00:06:48.680900 kubelet[2933]: E0423 00:06:48.460978 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:06:54.477268 sshd[3776]: Connection closed by 10.0.0.1 port 36966
Apr 23 00:06:54.509205 sshd-session[3766]: pam_unix(sshd:session): session closed for user core
Apr 23 00:06:55.114383 systemd[1]: sshd@15-10.0.0.13:22-10.0.0.1:36966.service: Deactivated successfully.
Apr 23 00:06:55.356720 systemd[1]: session-17.scope: Deactivated successfully.
Apr 23 00:06:55.357226 systemd[1]: session-17.scope: Consumed 4.470s CPU time, 17.9M memory peak.
Apr 23 00:06:55.448293 systemd-logind[1615]: Session 17 logged out. Waiting for processes to exit.
Apr 23 00:06:55.462930 systemd-logind[1615]: Removed session 17.
Apr 23 00:06:55.593033 kubelet[2933]: E0423 00:06:55.591665 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:06:55.631587 kubelet[2933]: E0423 00:06:55.621945 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.156s"
Apr 23 00:06:56.647579 kubelet[2933]: I0423 00:06:56.644814 2933 scope.go:117] "RemoveContainer" containerID="aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a"
Apr 23 00:06:56.661757 kubelet[2933]: E0423 00:06:56.661673 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:06:57.733693 containerd[1642]: time="2026-04-23T00:06:57.733528715Z" level=info msg="CreateContainer within sandbox \"3a52df8745692d6d701aeca319c0f89cb4c1c1d062a7028de80f7b311649ba0a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}"
Apr 23 00:06:59.144926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1714811552.mount: Deactivated successfully.
Apr 23 00:06:59.182293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3254059.mount: Deactivated successfully.
Apr 23 00:06:59.240539 containerd[1642]: time="2026-04-23T00:06:59.239805898Z" level=info msg="Container 1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65: CDI devices from CRI Config.CDIDevices: []"
Apr 23 00:07:00.508560 systemd[1]: Started sshd@16-10.0.0.13:22-10.0.0.1:59136.service - OpenSSH per-connection server daemon (10.0.0.1:59136).
Apr 23 00:07:01.946836 kubelet[2933]: E0423 00:07:01.945010 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:07:02.519897 kubelet[2933]: E0423 00:07:02.510852 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.1s"
Apr 23 00:07:03.862735 containerd[1642]: time="2026-04-23T00:07:03.862213592Z" level=info msg="CreateContainer within sandbox \"3a52df8745692d6d701aeca319c0f89cb4c1c1d062a7028de80f7b311649ba0a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\""
Apr 23 00:07:04.143228 containerd[1642]: time="2026-04-23T00:07:04.120143216Z" level=info msg="StartContainer for \"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\""
Apr 23 00:07:04.508369 sshd[3797]: Accepted publickey for core from 10.0.0.1 port 59136 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:07:04.532662 sshd-session[3797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:07:04.547906 containerd[1642]: time="2026-04-23T00:07:04.545649971Z" level=info msg="connecting to shim 1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65" address="unix:///run/containerd/s/3dfe3d6323e8f8f75559a13206af03e2868c85f01fbf4abe1aaa822e71aa8528" protocol=ttrpc version=3
Apr 23 00:07:04.671537 kubelet[2933]: E0423 00:07:04.667003 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.12s"
Apr 23 00:07:04.753918 systemd-logind[1615]: New session 18 of user core.
Apr 23 00:07:05.059310 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 23 00:07:06.665169 kubelet[2933]: E0423 00:07:06.664981 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.96s" Apr 23 00:07:11.730943 kubelet[2933]: E0423 00:07:11.730285 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 23 00:07:11.834816 kubelet[2933]: E0423 00:07:11.834641 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.157s" Apr 23 00:07:11.847033 sshd[3804]: Connection closed by 10.0.0.1 port 59136 Apr 23 00:07:11.843043 sshd-session[3797]: pam_unix(sshd:session): session closed for user core Apr 23 00:07:11.902896 systemd[1]: sshd@16-10.0.0.13:22-10.0.0.1:59136.service: Deactivated successfully. Apr 23 00:07:11.903723 systemd[1]: sshd@16-10.0.0.13:22-10.0.0.1:59136.service: Consumed 1.375s CPU time, 4.1M memory peak. Apr 23 00:07:11.916889 systemd[1]: session-18.scope: Deactivated successfully. Apr 23 00:07:11.917737 systemd[1]: session-18.scope: Consumed 3.450s CPU time, 15.6M memory peak. Apr 23 00:07:12.054768 systemd-logind[1615]: Session 18 logged out. Waiting for processes to exit. Apr 23 00:07:12.118011 systemd[1]: Started cri-containerd-1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65.scope - libcontainer container 1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65. Apr 23 00:07:12.560278 systemd-logind[1615]: Removed session 18. 
Apr 23 00:07:15.582820 kubelet[2933]: E0423 00:07:15.364365 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.614s" Apr 23 00:07:16.329309 containerd[1642]: time="2026-04-23T00:07:16.325944430Z" level=error msg="get state for 1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65" error="context deadline exceeded" Apr 23 00:07:16.389849 containerd[1642]: time="2026-04-23T00:07:16.351852632Z" level=warning msg="unknown status" status=0 Apr 23 00:07:16.414890 containerd[1642]: time="2026-04-23T00:07:16.414680543Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 23 00:07:17.253764 kubelet[2933]: E0423 00:07:17.251243 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 23 00:07:17.550933 systemd[1]: Started sshd@17-10.0.0.13:22-10.0.0.1:59550.service - OpenSSH per-connection server daemon (10.0.0.1:59550). Apr 23 00:07:17.704560 kubelet[2933]: E0423 00:07:17.704337 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.064s" Apr 23 00:07:20.611720 sshd[3839]: Accepted publickey for core from 10.0.0.1 port 59550 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:07:20.875912 sshd-session[3839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:07:21.832311 systemd-logind[1615]: New session 19 of user core. Apr 23 00:07:22.027205 systemd[1]: Started session-19.scope - Session 19 of User core. 
Apr 23 00:07:23.548208 kubelet[2933]: E0423 00:07:23.548025 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 23 00:07:24.248888 kubelet[2933]: E0423 00:07:24.200392 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.495s" Apr 23 00:07:25.712910 containerd[1642]: time="2026-04-23T00:07:25.369764329Z" level=info msg="StartContainer for \"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" returns successfully" Apr 23 00:07:29.537030 kubelet[2933]: E0423 00:07:29.533906 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:07:32.208709 sshd[3854]: Connection closed by 10.0.0.1 port 59550 Apr 23 00:07:32.174313 sshd-session[3839]: pam_unix(sshd:session): session closed for user core Apr 23 00:07:32.862278 systemd[1]: sshd@17-10.0.0.13:22-10.0.0.1:59550.service: Deactivated successfully. Apr 23 00:07:32.974879 systemd[1]: sshd@17-10.0.0.13:22-10.0.0.1:59550.service: Consumed 1.048s CPU time, 4.2M memory peak. Apr 23 00:07:33.264940 kubelet[2933]: E0423 00:07:33.013168 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 23 00:07:33.264966 systemd[1]: session-19.scope: Deactivated successfully. Apr 23 00:07:33.299177 systemd[1]: session-19.scope: Consumed 5.132s CPU time, 15.6M memory peak. Apr 23 00:07:33.514337 systemd-logind[1615]: Session 19 logged out. Waiting for processes to exit. Apr 23 00:07:34.371176 systemd-logind[1615]: Removed session 19. 
Apr 23 00:07:38.785820 systemd[1]: Started sshd@18-10.0.0.13:22-10.0.0.1:37364.service - OpenSSH per-connection server daemon (10.0.0.1:37364). Apr 23 00:07:39.821626 kubelet[2933]: E0423 00:07:39.820674 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.128s" Apr 23 00:07:39.894289 kubelet[2933]: E0423 00:07:39.821926 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 23 00:07:48.902043 kubelet[2933]: E0423 00:07:48.901893 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 23 00:07:49.502381 kubelet[2933]: E0423 00:07:49.493811 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:07:50.809736 sshd[3877]: Accepted publickey for core from 10.0.0.1 port 37364 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:07:51.432222 sshd-session[3877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:07:53.762018 systemd-logind[1615]: New session 20 of user core. Apr 23 00:07:54.065852 systemd[1]: Started session-20.scope - Session 20 of User core. 
Apr 23 00:07:54.628250 kubelet[2933]: E0423 00:07:52.480793 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:07:56.891032 containerd[1642]: time="2026-04-23T00:07:56.873896784Z" level=info msg="container event discarded" container=f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65 type=CONTAINER_STOPPED_EVENT Apr 23 00:08:04.937818 containerd[1642]: time="2026-04-23T00:08:02.766057915Z" level=info msg="container event discarded" container=f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1 type=CONTAINER_STOPPED_EVENT Apr 23 00:08:21.463865 kubelet[2933]: E0423 00:08:21.455344 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 23 00:08:26.425023 systemd[1]: cri-containerd-1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65.scope: Deactivated successfully. Apr 23 00:08:26.670067 systemd[1]: cri-containerd-1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65.scope: Consumed 12.657s CPU time, 20.6M memory peak. Apr 23 00:08:30.220965 systemd[1]: cri-containerd-a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386.scope: Deactivated successfully. Apr 23 00:08:30.566227 systemd[1]: cri-containerd-a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386.scope: Consumed 28.751s CPU time, 20.3M memory peak. 
Apr 23 00:08:31.809841 containerd[1642]: time="2026-04-23T00:08:31.147867153Z" level=info msg="container event discarded" container=a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386 type=CONTAINER_CREATED_EVENT Apr 23 00:08:32.412796 kubelet[2933]: E0423 00:08:31.959374 2933 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:07:51Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:07:51Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:07:51Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-23T00:07:51Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.13:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded" Apr 23 00:08:34.268755 containerd[1642]: time="2026-04-23T00:08:33.486872835Z" level=error msg="post event" error="context deadline exceeded" Apr 23 00:08:35.051803 containerd[1642]: time="2026-04-23T00:08:32.286751055Z" level=info msg="received container exit event container_id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" pid:3831 exit_status:1 exited_at:{seconds:1776902908 nanos:197011815}" Apr 23 00:08:35.421966 containerd[1642]: time="2026-04-23T00:08:34.237396958Z" level=info msg="received container exit event container_id:\"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\" id:\"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\" pid:3576 exit_status:1 exited_at:{seconds:1776902912 nanos:578283169}" Apr 23 00:08:38.295790 containerd[1642]: 
time="2026-04-23T00:08:34.646042212Z" level=error msg="ttrpc: received message on inactive stream" stream=25 Apr 23 00:08:39.715051 containerd[1642]: time="2026-04-23T00:08:37.733809873Z" level=error msg="post event" error="context deadline exceeded" Apr 23 00:08:40.581916 sshd[3882]: Connection closed by 10.0.0.1 port 37364 Apr 23 00:08:40.748857 kubelet[2933]: E0423 00:08:40.567922 2933 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 23 00:08:40.673836 sshd-session[3877]: pam_unix(sshd:session): session closed for user core Apr 23 00:08:40.827811 containerd[1642]: time="2026-04-23T00:08:40.152835693Z" level=error msg="forward event" error="context deadline exceeded" Apr 23 00:08:40.946333 containerd[1642]: time="2026-04-23T00:08:40.925912420Z" level=error msg="ttrpc: received message on inactive stream" stream=29 Apr 23 00:08:41.168695 containerd[1642]: time="2026-04-23T00:08:41.154367931Z" level=error msg="get state for a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386" error="context deadline exceeded" Apr 23 00:08:41.375875 systemd[1]: sshd@18-10.0.0.13:22-10.0.0.1:37364.service: Deactivated successfully. Apr 23 00:08:41.555349 systemd[1]: sshd@18-10.0.0.13:22-10.0.0.1:37364.service: Consumed 3.538s CPU time, 4.2M memory peak. Apr 23 00:08:41.704770 containerd[1642]: time="2026-04-23T00:08:41.166040441Z" level=warning msg="unknown status" status=0 Apr 23 00:08:41.772986 systemd[1]: session-20.scope: Deactivated successfully. Apr 23 00:08:41.790330 containerd[1642]: time="2026-04-23T00:08:40.939623025Z" level=error msg="ttrpc: received message on inactive stream" stream=17 Apr 23 00:08:41.875969 systemd[1]: session-20.scope: Consumed 19.129s CPU time, 17.9M memory peak. Apr 23 00:08:42.429944 systemd-logind[1615]: Session 20 logged out. Waiting for processes to exit. 
Apr 23 00:08:42.769778 containerd[1642]: time="2026-04-23T00:08:41.799184761Z" level=error msg="ttrpc: received message on inactive stream" stream=27 Apr 23 00:08:42.850223 systemd-logind[1615]: Removed session 20. Apr 23 00:08:42.881288 kubelet[2933]: E0423 00:08:42.881215 2933 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.13:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 23 00:08:43.269286 kubelet[2933]: E0423 00:08:43.265352 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 23 00:08:44.296676 kubelet[2933]: E0423 00:08:44.250996 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m4.419s" Apr 23 00:08:44.432580 kubelet[2933]: E0423 00:08:44.431882 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:08:44.784743 kubelet[2933]: E0423 00:08:44.784010 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:08:46.165213 containerd[1642]: time="2026-04-23T00:08:46.163736017Z" level=error msg="failed to handle container TaskExit event container_id:\"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\" id:\"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\" pid:3576 exit_status:1 exited_at:{seconds:1776902912 nanos:578283169}" error="failed to stop container: failed to delete task: context deadline exceeded" Apr 23 00:08:46.334970 containerd[1642]: time="2026-04-23T00:08:46.283664410Z" level=error msg="failed to handle container TaskExit event 
container_id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" pid:3831 exit_status:1 exited_at:{seconds:1776902908 nanos:197011815}" error="failed to stop container: context deadline exceeded" Apr 23 00:08:46.334970 containerd[1642]: time="2026-04-23T00:08:46.296690273Z" level=error msg="ttrpc: received message on inactive stream" stream=29 Apr 23 00:08:46.334970 containerd[1642]: time="2026-04-23T00:08:46.296966410Z" level=error msg="ttrpc: received message on inactive stream" stream=33 Apr 23 00:08:46.556618 containerd[1642]: time="2026-04-23T00:08:46.554382609Z" level=error msg="ttrpc: received message on inactive stream" stream=35 Apr 23 00:08:46.703000 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386-rootfs.mount: Deactivated successfully. Apr 23 00:08:46.906624 systemd[1]: Started sshd@19-10.0.0.13:22-10.0.0.1:54198.service - OpenSSH per-connection server daemon (10.0.0.1:54198). 
Apr 23 00:08:47.750765 containerd[1642]: time="2026-04-23T00:08:47.743047411Z" level=info msg="TaskExit event container_id:\"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\" id:\"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\" pid:3576 exit_status:1 exited_at:{seconds:1776902912 nanos:578283169}" Apr 23 00:08:48.563232 kubelet[2933]: E0423 00:08:48.562838 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:08:50.991928 kubelet[2933]: E0423 00:08:50.707760 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 23 00:08:51.435089 containerd[1642]: time="2026-04-23T00:08:51.408353265Z" level=info msg="container event discarded" container=aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a type=CONTAINER_CREATED_EVENT Apr 23 00:08:54.719740 sshd[3917]: Accepted publickey for core from 10.0.0.1 port 54198 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:08:54.928946 sshd-session[3917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:08:56.654079 systemd-logind[1615]: New session 21 of user core. Apr 23 00:08:56.965384 systemd[1]: Started session-21.scope - Session 21 of User core. 
Apr 23 00:08:58.191024 containerd[1642]: time="2026-04-23T00:08:58.183931704Z" level=error msg="ttrpc: received message on inactive stream" stream=41 Apr 23 00:08:58.378358 containerd[1642]: time="2026-04-23T00:08:58.271700946Z" level=error msg="ttrpc: received message on inactive stream" stream=45 Apr 23 00:08:58.378358 containerd[1642]: time="2026-04-23T00:08:58.272224546Z" level=error msg="Failed to handle backOff event container_id:\"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\" id:\"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\" pid:3576 exit_status:1 exited_at:{seconds:1776902912 nanos:578283169} for a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 23 00:08:58.378358 containerd[1642]: time="2026-04-23T00:08:58.272294076Z" level=info msg="TaskExit event container_id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" pid:3831 exit_status:1 exited_at:{seconds:1776902908 nanos:197011815}" Apr 23 00:09:03.762695 kubelet[2933]: E0423 00:09:03.729095 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 23 00:09:04.684602 kubelet[2933]: E0423 00:09:04.683743 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="18.388s" Apr 23 00:09:05.637622 kubelet[2933]: E0423 00:09:05.617772 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:09:08.293213 containerd[1642]: time="2026-04-23T00:09:08.292854176Z" level=error msg="Failed to handle backOff event 
container_id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" pid:3831 exit_status:1 exited_at:{seconds:1776902908 nanos:197011815} for 1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 23 00:09:08.744949 containerd[1642]: time="2026-04-23T00:09:08.742920289Z" level=info msg="TaskExit event container_id:\"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\" id:\"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\" pid:3576 exit_status:1 exited_at:{seconds:1776902912 nanos:578283169}" Apr 23 00:09:08.982919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65-rootfs.mount: Deactivated successfully. Apr 23 00:09:09.282637 containerd[1642]: time="2026-04-23T00:09:09.275360789Z" level=error msg="ttrpc: received message on inactive stream" stream=47 Apr 23 00:09:11.245093 kubelet[2933]: E0423 00:09:11.244827 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 23 00:09:11.468322 kubelet[2933]: E0423 00:09:11.468281 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.784s" Apr 23 00:09:11.630017 sshd[3932]: Connection closed by 10.0.0.1 port 54198 Apr 23 00:09:11.633058 sshd-session[3917]: pam_unix(sshd:session): session closed for user core Apr 23 00:09:12.581900 systemd[1]: sshd@19-10.0.0.13:22-10.0.0.1:54198.service: Deactivated successfully. Apr 23 00:09:12.680822 systemd[1]: sshd@19-10.0.0.13:22-10.0.0.1:54198.service: Consumed 2.286s CPU time, 4M memory peak. 
Apr 23 00:09:12.759335 kubelet[2933]: E0423 00:09:12.759080 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:09:12.760934 systemd[1]: session-21.scope: Deactivated successfully. Apr 23 00:09:12.859020 systemd[1]: session-21.scope: Consumed 6.891s CPU time, 17.7M memory peak. Apr 23 00:09:13.093739 systemd-logind[1615]: Session 21 logged out. Waiting for processes to exit. Apr 23 00:09:13.482224 systemd-logind[1615]: Removed session 21. Apr 23 00:09:14.188926 kubelet[2933]: E0423 00:09:14.184023 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.714s" Apr 23 00:09:14.966705 containerd[1642]: time="2026-04-23T00:09:14.950772180Z" level=info msg="container event discarded" container=f4dee17dd0007045dca9c3f7506acae4fa001341f4c097778e8947b6eea7bd65 type=CONTAINER_DELETED_EVENT Apr 23 00:09:16.592614 kubelet[2933]: E0423 00:09:16.591998 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.404s" Apr 23 00:09:16.979713 kubelet[2933]: E0423 00:09:16.974313 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 23 00:09:17.762046 systemd[1]: Started sshd@20-10.0.0.13:22-10.0.0.1:47078.service - OpenSSH per-connection server daemon (10.0.0.1:47078). 
Apr 23 00:09:18.159949 kubelet[2933]: E0423 00:09:18.126370 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.534s" Apr 23 00:09:18.574659 containerd[1642]: time="2026-04-23T00:09:18.569988298Z" level=error msg="Failed to handle backOff event container_id:\"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\" id:\"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\" pid:3576 exit_status:1 exited_at:{seconds:1776902912 nanos:578283169} for a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 23 00:09:18.613103 containerd[1642]: time="2026-04-23T00:09:18.575270992Z" level=info msg="TaskExit event container_id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" pid:3831 exit_status:1 exited_at:{seconds:1776902908 nanos:197011815}" Apr 23 00:09:19.090396 containerd[1642]: time="2026-04-23T00:09:19.074027393Z" level=error msg="ttrpc: received message on inactive stream" stream=61 Apr 23 00:09:19.953601 kubelet[2933]: E0423 00:09:19.952815 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.775s" Apr 23 00:09:24.631787 kubelet[2933]: E0423 00:09:24.629957 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 23 00:09:24.794236 sshd[3978]: Accepted publickey for core from 10.0.0.1 port 47078 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:09:25.240102 sshd-session[3978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:09:26.781924 systemd-logind[1615]: New session 22 of user core. 
Apr 23 00:09:26.792281 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 23 00:09:26.856075 kubelet[2933]: E0423 00:09:26.793749 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.735s" Apr 23 00:09:28.849342 containerd[1642]: time="2026-04-23T00:09:28.844089741Z" level=error msg="Failed to handle backOff event container_id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" pid:3831 exit_status:1 exited_at:{seconds:1776902908 nanos:197011815} for 1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 23 00:09:28.849342 containerd[1642]: time="2026-04-23T00:09:28.844837640Z" level=info msg="TaskExit event container_id:\"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\" id:\"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\" pid:3576 exit_status:1 exited_at:{seconds:1776902912 nanos:578283169}" Apr 23 00:09:28.849342 containerd[1642]: time="2026-04-23T00:09:28.845723098Z" level=info msg="StopContainer for \"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\" with timeout 30 (s)" Apr 23 00:09:30.625942 containerd[1642]: time="2026-04-23T00:09:30.622802901Z" level=error msg="ttrpc: received message on inactive stream" stream=63 Apr 23 00:09:30.765984 kubelet[2933]: E0423 00:09:30.765897 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 23 00:09:32.131207 containerd[1642]: time="2026-04-23T00:09:31.971682041Z" level=info msg="Ensure that container a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386 in task-service has been cleanup successfully" Apr 23 
00:09:32.547968 containerd[1642]: time="2026-04-23T00:09:32.541019116Z" level=info msg="StopContainer for \"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\" returns successfully" Apr 23 00:09:33.979766 kubelet[2933]: E0423 00:09:33.975202 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:09:34.044671 containerd[1642]: time="2026-04-23T00:09:34.043316432Z" level=error msg="collecting metrics for a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386" error="ttrpc: closed" Apr 23 00:09:36.890350 containerd[1642]: time="2026-04-23T00:09:36.883052155Z" level=info msg="TaskExit event container_id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" pid:3831 exit_status:1 exited_at:{seconds:1776902908 nanos:197011815}" Apr 23 00:09:39.451814 kubelet[2933]: E0423 00:09:39.445975 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 23 00:09:45.257791 containerd[1642]: time="2026-04-23T00:09:44.273924279Z" level=info msg="container event discarded" container=a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386 type=CONTAINER_STARTED_EVENT Apr 23 00:09:54.510331 sshd[3999]: Connection closed by 10.0.0.1 port 47078 Apr 23 00:09:54.700268 sshd-session[3978]: pam_unix(sshd:session): session closed for user core Apr 23 00:09:58.307080 systemd[1]: sshd@20-10.0.0.13:22-10.0.0.1:47078.service: Deactivated successfully. Apr 23 00:09:58.924884 systemd[1]: sshd@20-10.0.0.13:22-10.0.0.1:47078.service: Consumed 1.911s CPU time, 4.2M memory peak. Apr 23 00:09:59.893823 systemd[1]: session-22.scope: Deactivated successfully. 
Apr 23 00:10:00.305034 systemd[1]: session-22.scope: Consumed 13.657s CPU time, 19.5M memory peak. Apr 23 00:10:01.297935 containerd[1642]: time="2026-04-23T00:09:59.988625507Z" level=error msg="ttrpc: received message on inactive stream" stream=55 Apr 23 00:10:01.757090 containerd[1642]: time="2026-04-23T00:10:01.303012044Z" level=error msg="get state for 3a52df8745692d6d701aeca319c0f89cb4c1c1d062a7028de80f7b311649ba0a" error="context deadline exceeded" Apr 23 00:10:01.757090 containerd[1642]: time="2026-04-23T00:10:01.358818698Z" level=warning msg="unknown status" status=0 Apr 23 00:10:01.553088 systemd-logind[1615]: Session 22 logged out. Waiting for processes to exit. Apr 23 00:10:09.163033 systemd[1]: Started sshd@21-10.0.0.13:22-10.0.0.1:44492.service - OpenSSH per-connection server daemon (10.0.0.1:44492). Apr 23 00:10:13.347003 systemd-logind[1615]: Removed session 22. Apr 23 00:10:21.286965 kubelet[2933]: E0423 00:10:21.273030 2933 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 23 00:10:25.360979 containerd[1642]: time="2026-04-23T00:10:25.308692432Z" level=info msg="container event discarded" container=aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a type=CONTAINER_STARTED_EVENT Apr 23 00:10:30.054106 containerd[1642]: time="2026-04-23T00:10:29.648142841Z" level=info msg="container event discarded" container=f40c0457601524fd1ee74c002291ef6c3d91f3edb20b05016691220a95be93f1 type=CONTAINER_DELETED_EVENT Apr 23 00:10:30.569122 containerd[1642]: time="2026-04-23T00:10:30.036698148Z" level=info msg="CreateContainer within sandbox \"9c9db74f9f562891448db2db4e40d4f185e32e190ba41d407668b34b512632bd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}" Apr 23 00:10:36.157125 containerd[1642]: time="2026-04-23T00:10:36.157036852Z" level=error 
msg="Failed to handle backOff event container_id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" pid:3831 exit_status:1 exited_at:{seconds:1776902908 nanos:197011815} for 1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 23 00:10:36.761965 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 44492 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:10:40.339221 sshd-session[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:10:40.871869 containerd[1642]: time="2026-04-23T00:10:40.864128577Z" level=error msg="ttrpc: received message on inactive stream" stream=71 Apr 23 00:10:41.349900 kubelet[2933]: E0423 00:10:28.513026 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 23 00:10:41.486109 containerd[1642]: time="2026-04-23T00:10:41.399116575Z" level=error msg="ttrpc: received message on inactive stream" stream=73 Apr 23 00:10:43.729983 containerd[1642]: time="2026-04-23T00:10:43.704067607Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 00:10:43.763263 systemd-logind[1615]: New session 23 of user core. Apr 23 00:10:43.941744 systemd[1]: Started session-23.scope - Session 23 of User core. 
Apr 23 00:10:44.393919 containerd[1642]: time="2026-04-23T00:10:43.764992646Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=28816162"
Apr 23 00:10:48.789123 containerd[1642]: time="2026-04-23T00:10:48.788860070Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 00:10:49.087637 kubelet[2933]: E0423 00:10:49.082033 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m21.619s"
Apr 23 00:10:51.273995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1514880934.mount: Deactivated successfully.
Apr 23 00:10:54.454324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3698166671.mount: Deactivated successfully.
Apr 23 00:10:55.199053 containerd[1642]: time="2026-04-23T00:10:55.193125484Z" level=info msg="Container 6a49ddbae22029ec81dc64caa75577b8bdcde0ae29d4a38d8e41a814528f8d1c: CDI devices from CRI Config.CDIDevices: []"
Apr 23 00:10:56.574367 containerd[1642]: time="2026-04-23T00:10:56.273881675Z" level=info msg="TaskExit event container_id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" pid:3831 exit_status:1 exited_at:{seconds:1776902908 nanos:197011815}"
Apr 23 00:10:57.474009 kubelet[2933]: E0423 00:10:57.459646 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:11:09.680352 containerd[1642]: time="2026-04-23T00:11:09.680116949Z" level=error msg="Failed to handle backOff event container_id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" pid:3831 exit_status:1 exited_at:{seconds:1776902908 nanos:197011815} for 1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 23 00:11:09.794158 containerd[1642]: time="2026-04-23T00:11:09.776249287Z" level=error msg="ttrpc: received message on inactive stream" stream=81
Apr 23 00:11:10.365660 containerd[1642]: time="2026-04-23T00:11:10.362740503Z" level=error msg="ttrpc: received message on inactive stream" stream=79
Apr 23 00:11:16.675601 sshd[4034]: Connection closed by 10.0.0.1 port 44492
Apr 23 00:11:16.675017 sshd-session[4023]: pam_unix(sshd:session): session closed for user core
Apr 23 00:11:16.790083 kubelet[2933]: E0423 00:11:16.676008 2933 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 23 00:11:17.858883 systemd[1]: sshd@21-10.0.0.13:22-10.0.0.1:44492.service: Deactivated successfully.
Apr 23 00:11:17.988991 systemd[1]: sshd@21-10.0.0.13:22-10.0.0.1:44492.service: Consumed 6.590s CPU time, 4M memory peak.
Apr 23 00:11:18.039953 kubelet[2933]: E0423 00:11:17.988164 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:11:18.099932 systemd[1]: session-23.scope: Deactivated successfully.
Apr 23 00:11:18.176279 systemd[1]: session-23.scope: Consumed 16.004s CPU time, 15.4M memory peak.
Apr 23 00:11:18.344339 systemd-logind[1615]: Session 23 logged out. Waiting for processes to exit.
Apr 23 00:11:18.910585 systemd-logind[1615]: Removed session 23.
Apr 23 00:11:19.068946 kubelet[2933]: E0423 00:11:19.068043 2933 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 23 00:11:19.962994 containerd[1642]: time="2026-04-23T00:11:19.962857837Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 00:11:23.747580 systemd[1]: Started sshd@22-10.0.0.13:22-10.0.0.1:39494.service - OpenSSH per-connection server daemon (10.0.0.1:39494).
Apr 23 00:11:25.066492 containerd[1642]: time="2026-04-23T00:11:25.065264357Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 13m56.891010689s"
Apr 23 00:11:25.066492 containerd[1642]: time="2026-04-23T00:11:25.066177261Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\""
Apr 23 00:11:25.137100 containerd[1642]: time="2026-04-23T00:11:25.065950926Z" level=info msg="CreateContainer within sandbox \"9c9db74f9f562891448db2db4e40d4f185e32e190ba41d407668b34b512632bd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"6a49ddbae22029ec81dc64caa75577b8bdcde0ae29d4a38d8e41a814528f8d1c\""
Apr 23 00:11:25.749062 kubelet[2933]: E0423 00:11:25.744619 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:11:26.280702 containerd[1642]: time="2026-04-23T00:11:26.263101435Z" level=info msg="StartContainer for \"6a49ddbae22029ec81dc64caa75577b8bdcde0ae29d4a38d8e41a814528f8d1c\""
Apr 23 00:11:26.733759 containerd[1642]: time="2026-04-23T00:11:26.733376216Z" level=info msg="TaskExit event container_id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" pid:3831 exit_status:1 exited_at:{seconds:1776902908 nanos:197011815}"
Apr 23 00:11:28.206170 kubelet[2933]: E0423 00:11:28.206068 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="36.961s"
Apr 23 00:11:29.579976 containerd[1642]: time="2026-04-23T00:11:29.578953750Z" level=info msg="connecting to shim 6a49ddbae22029ec81dc64caa75577b8bdcde0ae29d4a38d8e41a814528f8d1c" address="unix:///run/containerd/s/52c20a04aff33a832cf09fe18f3d8420202bcd603b0b0170557108773320997c" protocol=ttrpc version=3
Apr 23 00:11:30.966855 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 39494 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:11:31.884136 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:11:34.136843 systemd-logind[1615]: New session 24 of user core.
Apr 23 00:11:34.464082 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 23 00:11:38.355107 containerd[1642]: time="2026-04-23T00:11:37.421808956Z" level=error msg="ttrpc: received message on inactive stream" stream=87
Apr 23 00:11:38.681941 containerd[1642]: time="2026-04-23T00:11:38.379977308Z" level=error msg="ttrpc: received message on inactive stream" stream=91
Apr 23 00:11:38.928287 containerd[1642]: time="2026-04-23T00:11:38.400119836Z" level=error msg="Failed to handle backOff event container_id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" pid:3831 exit_status:1 exited_at:{seconds:1776902908 nanos:197011815} for 1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 23 00:11:39.207162 kubelet[2933]: E0423 00:11:39.192051 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:11:43.451973 systemd[1]: Started cri-containerd-6a49ddbae22029ec81dc64caa75577b8bdcde0ae29d4a38d8e41a814528f8d1c.scope - libcontainer container 6a49ddbae22029ec81dc64caa75577b8bdcde0ae29d4a38d8e41a814528f8d1c.
Apr 23 00:11:45.354005 containerd[1642]: time="2026-04-23T00:11:45.352920788Z" level=info msg="container event discarded" container=aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a type=CONTAINER_STOPPED_EVENT
Apr 23 00:11:48.167155 containerd[1642]: time="2026-04-23T00:11:47.676184621Z" level=info msg="CreateContainer within sandbox \"14110e952a6558f28bbb26fcf748cdebac38c6a24ba5d2a4c183508330e167e2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 23 00:11:51.802806 kubelet[2933]: E0423 00:11:51.270193 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:11:55.187550 kubelet[2933]: I0423 00:11:55.187094 2933 scope.go:117] "RemoveContainer" containerID="a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386"
Apr 23 00:11:58.242906 sshd[4084]: Connection closed by 10.0.0.1 port 39494
Apr 23 00:11:58.253248 sshd-session[4061]: pam_unix(sshd:session): session closed for user core
Apr 23 00:11:58.524167 kubelet[2933]: E0423 00:11:58.393228 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="30.149s"
Apr 23 00:11:58.538514 kubelet[2933]: E0423 00:11:58.538230 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:11:58.752980 containerd[1642]: time="2026-04-23T00:11:58.746370846Z" level=info msg="RemoveContainer for \"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\""
Apr 23 00:11:58.792881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3410541367.mount: Deactivated successfully.
Apr 23 00:11:59.079604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount395028902.mount: Deactivated successfully.
Apr 23 00:11:59.151763 systemd[1]: sshd@22-10.0.0.13:22-10.0.0.1:39494.service: Deactivated successfully.
Apr 23 00:11:59.160833 systemd[1]: sshd@22-10.0.0.13:22-10.0.0.1:39494.service: Consumed 2.419s CPU time, 4.2M memory peak.
Apr 23 00:11:59.255370 systemd[1]: session-24.scope: Deactivated successfully.
Apr 23 00:11:59.288526 systemd[1]: session-24.scope: Consumed 11.470s CPU time, 15.2M memory peak.
Apr 23 00:11:59.574043 systemd-logind[1615]: Session 24 logged out. Waiting for processes to exit.
Apr 23 00:11:59.650516 containerd[1642]: time="2026-04-23T00:11:59.639245225Z" level=info msg="Container c01a9b837792b74f08cecc1eb68b06df46ff5f0bb92f4c4824de943915851f6b: CDI devices from CRI Config.CDIDevices: []"
Apr 23 00:11:59.861042 kubelet[2933]: E0423 00:11:59.773244 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.375s"
Apr 23 00:12:00.016080 systemd-logind[1615]: Removed session 24.
Apr 23 00:12:00.087391 kubelet[2933]: E0423 00:11:59.962315 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:12:00.276055 kubelet[2933]: E0423 00:12:00.272709 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:12:00.584977 kubelet[2933]: E0423 00:12:00.574629 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:12:01.554985 containerd[1642]: time="2026-04-23T00:12:01.549127718Z" level=info msg="RemoveContainer for \"a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386\" returns successfully"
Apr 23 00:12:02.371119 kubelet[2933]: E0423 00:12:02.260933 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.399s"
Apr 23 00:12:04.055364 systemd[1]: Started sshd@23-10.0.0.13:22-10.0.0.1:39406.service - OpenSSH per-connection server daemon (10.0.0.1:39406).
Apr 23 00:12:04.490944 containerd[1642]: time="2026-04-23T00:12:04.283173941Z" level=info msg="container event discarded" container=1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65 type=CONTAINER_CREATED_EVENT
Apr 23 00:12:06.849581 containerd[1642]: time="2026-04-23T00:12:04.578032900Z" level=info msg="CreateContainer within sandbox \"14110e952a6558f28bbb26fcf748cdebac38c6a24ba5d2a4c183508330e167e2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c01a9b837792b74f08cecc1eb68b06df46ff5f0bb92f4c4824de943915851f6b\""
Apr 23 00:12:07.861713 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 39406 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:12:08.054014 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:12:08.343926 kubelet[2933]: E0423 00:12:08.333925 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:12:08.536002 containerd[1642]: time="2026-04-23T00:12:08.535891179Z" level=info msg="StartContainer for \"6a49ddbae22029ec81dc64caa75577b8bdcde0ae29d4a38d8e41a814528f8d1c\" returns successfully"
Apr 23 00:12:08.545809 containerd[1642]: time="2026-04-23T00:12:08.538017007Z" level=info msg="StartContainer for \"c01a9b837792b74f08cecc1eb68b06df46ff5f0bb92f4c4824de943915851f6b\""
Apr 23 00:12:08.776175 systemd-logind[1615]: New session 25 of user core.
Apr 23 00:12:08.879768 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 23 00:12:09.379703 containerd[1642]: time="2026-04-23T00:12:09.378685517Z" level=info msg="connecting to shim c01a9b837792b74f08cecc1eb68b06df46ff5f0bb92f4c4824de943915851f6b" address="unix:///run/containerd/s/a35e04226de6ecfa8cf22490a46d8d2b80aaf1006f85bc829722ce185d694334" protocol=ttrpc version=3
Apr 23 00:12:11.283562 kubelet[2933]: E0423 00:12:11.283227 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.534s"
Apr 23 00:12:11.869870 containerd[1642]: time="2026-04-23T00:12:11.853130620Z" level=info msg="TaskExit event container_id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" pid:3831 exit_status:1 exited_at:{seconds:1776902908 nanos:197011815}"
Apr 23 00:12:13.050712 systemd[1]: Started cri-containerd-c01a9b837792b74f08cecc1eb68b06df46ff5f0bb92f4c4824de943915851f6b.scope - libcontainer container c01a9b837792b74f08cecc1eb68b06df46ff5f0bb92f4c4824de943915851f6b.
Apr 23 00:12:15.342106 kubelet[2933]: E0423 00:12:15.338213 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:12:17.462577 kubelet[2933]: E0423 00:12:17.462211 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.854s"
Apr 23 00:12:18.654514 containerd[1642]: time="2026-04-23T00:12:18.652597693Z" level=error msg="get state for c01a9b837792b74f08cecc1eb68b06df46ff5f0bb92f4c4824de943915851f6b" error="context deadline exceeded"
Apr 23 00:12:18.822961 sshd[4134]: Connection closed by 10.0.0.1 port 39406
Apr 23 00:12:19.012362 sshd-session[4125]: pam_unix(sshd:session): session closed for user core
Apr 23 00:12:19.713885 containerd[1642]: time="2026-04-23T00:12:19.351024799Z" level=warning msg="unknown status" status=0
Apr 23 00:12:19.824113 systemd[1]: sshd@23-10.0.0.13:22-10.0.0.1:39406.service: Deactivated successfully.
Apr 23 00:12:19.830951 containerd[1642]: time="2026-04-23T00:12:19.830400429Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 23 00:12:19.835681 systemd[1]: sshd@23-10.0.0.13:22-10.0.0.1:39406.service: Consumed 1.085s CPU time, 4.2M memory peak.
Apr 23 00:12:20.243700 systemd[1]: session-25.scope: Deactivated successfully.
Apr 23 00:12:20.333367 systemd[1]: session-25.scope: Consumed 5.085s CPU time, 15.8M memory peak.
Apr 23 00:12:20.624544 systemd-logind[1615]: Session 25 logged out. Waiting for processes to exit.
Apr 23 00:12:21.081210 systemd-logind[1615]: Removed session 25.
Apr 23 00:12:21.271627 kubelet[2933]: E0423 00:12:21.270100 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:12:21.852153 containerd[1642]: time="2026-04-23T00:12:21.849786936Z" level=error msg="Failed to handle backOff event container_id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" pid:3831 exit_status:1 exited_at:{seconds:1776902908 nanos:197011815} for 1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 23 00:12:22.325850 kubelet[2933]: E0423 00:12:22.325734 2933 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 00:12:22.574166 containerd[1642]: time="2026-04-23T00:12:22.572185892Z" level=error msg="ttrpc: received message on inactive stream" stream=107
Apr 23 00:12:22.906786 kubelet[2933]: E0423 00:12:22.904121 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.25s"
Apr 23 00:12:22.960595 kubelet[2933]: I0423 00:12:22.959001 2933 status_manager.go:355] "Container readiness changed for unknown container" pod="kube-system/kube-scheduler-localhost" containerID="containerd://a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386"
Apr 23 00:12:23.569259 kubelet[2933]: E0423 00:12:23.568857 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:12:23.999739 containerd[1642]: time="2026-04-23T00:12:23.988197846Z" level=info msg="container event discarded" container=1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65 type=CONTAINER_STARTED_EVENT
Apr 23 00:12:24.593150 systemd[1]: Started sshd@24-10.0.0.13:22-10.0.0.1:45038.service - OpenSSH per-connection server daemon (10.0.0.1:45038).
Apr 23 00:12:26.456136 systemd[1]: cri-containerd-c01a9b837792b74f08cecc1eb68b06df46ff5f0bb92f4c4824de943915851f6b.scope: Deactivated successfully.
Apr 23 00:12:28.376834 containerd[1642]: time="2026-04-23T00:12:28.369968385Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcafe337f_d6a9_4ed4_8582_b10d21e57fb6.slice/cri-containerd-c01a9b837792b74f08cecc1eb68b06df46ff5f0bb92f4c4824de943915851f6b.scope/memory.events\": no such file or directory"
Apr 23 00:12:29.376938 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 45038 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:12:29.724185 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:12:30.111246 containerd[1642]: time="2026-04-23T00:12:30.101709578Z" level=info msg="received container exit event container_id:\"c01a9b837792b74f08cecc1eb68b06df46ff5f0bb92f4c4824de943915851f6b\" id:\"c01a9b837792b74f08cecc1eb68b06df46ff5f0bb92f4c4824de943915851f6b\" pid:4158 exited_at:{seconds:1776903148 nanos:369751154}"
Apr 23 00:12:30.269628 kubelet[2933]: E0423 00:12:30.154242 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:12:30.788110 systemd-logind[1615]: New session 26 of user core.
Apr 23 00:12:30.814941 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 23 00:12:32.641965 kubelet[2933]: E0423 00:12:32.206233 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.199s"
Apr 23 00:12:33.864351 containerd[1642]: time="2026-04-23T00:12:33.861209639Z" level=info msg="StartContainer for \"c01a9b837792b74f08cecc1eb68b06df46ff5f0bb92f4c4824de943915851f6b\" returns successfully"
Apr 23 00:12:40.761773 containerd[1642]: time="2026-04-23T00:12:40.752144826Z" level=error msg="failed to handle container TaskExit event container_id:\"c01a9b837792b74f08cecc1eb68b06df46ff5f0bb92f4c4824de943915851f6b\" id:\"c01a9b837792b74f08cecc1eb68b06df46ff5f0bb92f4c4824de943915851f6b\" pid:4158 exited_at:{seconds:1776903148 nanos:369751154}" error="failed to stop container: failed to delete task: context deadline exceeded"
Apr 23 00:12:41.673825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c01a9b837792b74f08cecc1eb68b06df46ff5f0bb92f4c4824de943915851f6b-rootfs.mount: Deactivated successfully.
Apr 23 00:12:41.915076 containerd[1642]: time="2026-04-23T00:12:41.910396943Z" level=error msg="ttrpc: received message on inactive stream" stream=35
Apr 23 00:12:42.705219 containerd[1642]: time="2026-04-23T00:12:42.695232976Z" level=info msg="TaskExit event container_id:\"c01a9b837792b74f08cecc1eb68b06df46ff5f0bb92f4c4824de943915851f6b\" id:\"c01a9b837792b74f08cecc1eb68b06df46ff5f0bb92f4c4824de943915851f6b\" pid:4158 exited_at:{seconds:1776903148 nanos:369751154}"
Apr 23 00:12:43.016279 sshd[4197]: Connection closed by 10.0.0.1 port 45038
Apr 23 00:12:43.012522 sshd-session[4187]: pam_unix(sshd:session): session closed for user core
Apr 23 00:12:43.515189 systemd[1]: sshd@24-10.0.0.13:22-10.0.0.1:45038.service: Deactivated successfully.
Apr 23 00:12:43.628565 systemd[1]: sshd@24-10.0.0.13:22-10.0.0.1:45038.service: Consumed 1.700s CPU time, 4.2M memory peak.
Apr 23 00:12:43.991788 systemd[1]: session-26.scope: Deactivated successfully.
Apr 23 00:12:44.041472 systemd[1]: session-26.scope: Consumed 6.474s CPU time, 16.2M memory peak.
Apr 23 00:12:44.263382 systemd-logind[1615]: Session 26 logged out. Waiting for processes to exit.
Apr 23 00:12:44.615017 systemd-logind[1615]: Removed session 26.
Apr 23 00:12:45.040787 kubelet[2933]: E0423 00:12:45.020714 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.115s"
Apr 23 00:12:46.888170 kubelet[2933]: E0423 00:12:46.872349 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:12:47.046503 kubelet[2933]: E0423 00:12:46.965478 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:12:47.347882 kubelet[2933]: E0423 00:12:47.289676 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.198s"
Apr 23 00:12:48.338165 kubelet[2933]: E0423 00:12:48.336264 2933 configmap.go:193] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition
Apr 23 00:12:48.344263 kubelet[2933]: E0423 00:12:48.344160 2933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cafe337f-d6a9-4ed4-8582-b10d21e57fb6-flannel-cfg podName:cafe337f-d6a9-4ed4-8582-b10d21e57fb6 nodeName:}" failed. No retries permitted until 2026-04-23 00:12:48.844133983 +0000 UTC m=+1327.129479626 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/cafe337f-d6a9-4ed4-8582-b10d21e57fb6-flannel-cfg") pod "kube-flannel-ds-wttf5" (UID: "cafe337f-d6a9-4ed4-8582-b10d21e57fb6") : failed to sync configmap cache: timed out waiting for the condition
Apr 23 00:12:48.364618 systemd[1]: Started sshd@25-10.0.0.13:22-10.0.0.1:45738.service - OpenSSH per-connection server daemon (10.0.0.1:45738).
Apr 23 00:12:49.084148 kubelet[2933]: E0423 00:12:49.084075 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.794s"
Apr 23 00:12:49.383865 kubelet[2933]: E0423 00:12:49.382169 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:12:50.547844 kubelet[2933]: E0423 00:12:50.545225 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:12:51.176167 kubelet[2933]: E0423 00:12:51.176066 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:12:51.346664 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 45738 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:12:51.552195 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:12:52.074212 containerd[1642]: time="2026-04-23T00:12:52.070100243Z" level=info msg="CreateContainer within sandbox \"14110e952a6558f28bbb26fcf748cdebac38c6a24ba5d2a4c183508330e167e2\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Apr 23 00:12:52.115272 systemd-logind[1615]: New session 27 of user core.
Apr 23 00:12:52.364234 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 23 00:12:53.146709 containerd[1642]: time="2026-04-23T00:12:53.146607149Z" level=info msg="Container 849dfdef3690072d5b505d9b38bcd1891cee46e3a1ead84e76e6e02b3e55b31b: CDI devices from CRI Config.CDIDevices: []"
Apr 23 00:12:53.249197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3262448757.mount: Deactivated successfully.
Apr 23 00:12:53.350798 containerd[1642]: time="2026-04-23T00:12:53.350645200Z" level=info msg="CreateContainer within sandbox \"14110e952a6558f28bbb26fcf748cdebac38c6a24ba5d2a4c183508330e167e2\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"849dfdef3690072d5b505d9b38bcd1891cee46e3a1ead84e76e6e02b3e55b31b\""
Apr 23 00:12:53.363776 containerd[1642]: time="2026-04-23T00:12:53.362081193Z" level=info msg="StartContainer for \"849dfdef3690072d5b505d9b38bcd1891cee46e3a1ead84e76e6e02b3e55b31b\""
Apr 23 00:12:53.543724 containerd[1642]: time="2026-04-23T00:12:53.542167308Z" level=info msg="connecting to shim 849dfdef3690072d5b505d9b38bcd1891cee46e3a1ead84e76e6e02b3e55b31b" address="unix:///run/containerd/s/a35e04226de6ecfa8cf22490a46d8d2b80aaf1006f85bc829722ce185d694334" protocol=ttrpc version=3
Apr 23 00:12:54.059923 systemd[1]: Started cri-containerd-849dfdef3690072d5b505d9b38bcd1891cee46e3a1ead84e76e6e02b3e55b31b.scope - libcontainer container 849dfdef3690072d5b505d9b38bcd1891cee46e3a1ead84e76e6e02b3e55b31b.
Apr 23 00:12:55.168511 sshd[4250]: Connection closed by 10.0.0.1 port 45738
Apr 23 00:12:55.173638 sshd-session[4244]: pam_unix(sshd:session): session closed for user core
Apr 23 00:12:55.460427 systemd[1]: sshd@25-10.0.0.13:22-10.0.0.1:45738.service: Deactivated successfully.
Apr 23 00:12:55.492938 systemd[1]: sshd@25-10.0.0.13:22-10.0.0.1:45738.service: Consumed 1.111s CPU time, 4M memory peak.
Apr 23 00:12:55.591765 systemd[1]: session-27.scope: Deactivated successfully.
Apr 23 00:12:55.592501 systemd[1]: session-27.scope: Consumed 1.584s CPU time, 17.9M memory peak.
Apr 23 00:12:55.675988 systemd-logind[1615]: Session 27 logged out. Waiting for processes to exit.
Apr 23 00:12:55.778473 systemd[1]: Started sshd@26-10.0.0.13:22-10.0.0.1:52636.service - OpenSSH per-connection server daemon (10.0.0.1:52636).
Apr 23 00:12:55.782169 systemd-logind[1615]: Removed session 27.
Apr 23 00:12:56.815910 containerd[1642]: time="2026-04-23T00:12:56.812554918Z" level=info msg="StartContainer for \"849dfdef3690072d5b505d9b38bcd1891cee46e3a1ead84e76e6e02b3e55b31b\" returns successfully"
Apr 23 00:12:57.126628 sshd[4287]: Accepted publickey for core from 10.0.0.1 port 52636 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:12:57.138052 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:12:57.657881 systemd-logind[1615]: New session 28 of user core.
Apr 23 00:12:57.679894 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 23 00:12:59.077113 systemd-networkd[1542]: flannel.1: Link UP
Apr 23 00:12:59.077152 systemd-networkd[1542]: flannel.1: Gained carrier
Apr 23 00:12:59.310123 kubelet[2933]: E0423 00:12:59.308981 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:13:00.350054 kubelet[2933]: I0423 00:13:00.344006 2933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-wttf5" podStartSLOduration=252.993538901 podStartE2EDuration="20m52.343768125s" podCreationTimestamp="2026-04-22 23:52:08 +0000 UTC" firstStartedPulling="2026-04-22 23:54:47.425764069 +0000 UTC m=+245.711109704" lastFinishedPulling="2026-04-23 00:11:26.775993292 +0000 UTC m=+1245.061338928" observedRunningTime="2026-04-23 00:13:00.293544607 +0000 UTC m=+1338.578890244" watchObservedRunningTime="2026-04-23 00:13:00.343768125 +0000 UTC m=+1338.629113785"
Apr 23 00:13:00.349075 systemd-networkd[1542]: flannel.1: Gained IPv6LL
Apr 23 00:13:00.534481 kubelet[2933]: E0423 00:13:00.534379 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.122s"
Apr 23 00:13:02.764200 kubelet[2933]: E0423 00:13:02.763969 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.211s"
Apr 23 00:13:03.599252 sshd[4302]: Connection closed by 10.0.0.1 port 52636
Apr 23 00:13:03.668371 sshd-session[4287]: pam_unix(sshd:session): session closed for user core
Apr 23 00:13:03.788240 systemd[1]: sshd@26-10.0.0.13:22-10.0.0.1:52636.service: Deactivated successfully.
Apr 23 00:13:03.833532 systemd[1]: session-28.scope: Deactivated successfully.
Apr 23 00:13:03.836339 systemd[1]: session-28.scope: Consumed 3.084s CPU time, 25.6M memory peak.
Apr 23 00:13:03.857856 systemd-logind[1615]: Session 28 logged out. Waiting for processes to exit.
Apr 23 00:13:03.981622 systemd[1]: Started sshd@27-10.0.0.13:22-10.0.0.1:52644.service - OpenSSH per-connection server daemon (10.0.0.1:52644).
Apr 23 00:13:04.002827 systemd-logind[1615]: Removed session 28.
Apr 23 00:13:05.143705 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 52644 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:13:05.196566 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:13:05.681489 systemd-logind[1615]: New session 29 of user core.
Apr 23 00:13:05.879261 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 23 00:13:09.838605 sshd[4380]: Connection closed by 10.0.0.1 port 52644
Apr 23 00:13:09.845696 sshd-session[4375]: pam_unix(sshd:session): session closed for user core
Apr 23 00:13:10.199142 systemd[1]: sshd@27-10.0.0.13:22-10.0.0.1:52644.service: Deactivated successfully.
Apr 23 00:13:10.610900 systemd[1]: session-29.scope: Deactivated successfully.
Apr 23 00:13:10.728149 systemd[1]: session-29.scope: Consumed 2.428s CPU time, 16.4M memory peak.
Apr 23 00:13:10.889586 systemd-logind[1615]: Session 29 logged out. Waiting for processes to exit.
Apr 23 00:13:11.184982 systemd-logind[1615]: Removed session 29.
Apr 23 00:13:13.761286 kubelet[2933]: E0423 00:13:13.750340 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:13:15.291934 systemd[1]: Started sshd@28-10.0.0.13:22-10.0.0.1:47174.service - OpenSSH per-connection server daemon (10.0.0.1:47174).
Apr 23 00:13:15.904062 kubelet[2933]: E0423 00:13:15.900379 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:13:16.067009 kubelet[2933]: E0423 00:13:16.066970 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:13:17.775190 sshd[4430]: Accepted publickey for core from 10.0.0.1 port 47174 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:13:17.784374 sshd-session[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:13:18.347848 systemd-logind[1615]: New session 30 of user core.
Apr 23 00:13:18.377781 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 23 00:13:19.984568 sshd[4444]: Connection closed by 10.0.0.1 port 47174
Apr 23 00:13:19.988615 sshd-session[4430]: pam_unix(sshd:session): session closed for user core
Apr 23 00:13:20.215597 systemd[1]: sshd@28-10.0.0.13:22-10.0.0.1:47174.service: Deactivated successfully.
Apr 23 00:13:20.216491 systemd[1]: sshd@28-10.0.0.13:22-10.0.0.1:47174.service: Consumed 1.027s CPU time, 4M memory peak.
Apr 23 00:13:20.362174 systemd[1]: session-30.scope: Deactivated successfully.
Apr 23 00:13:20.405890 systemd[1]: session-30.scope: Consumed 1.025s CPU time, 16.3M memory peak.
Apr 23 00:13:20.494937 systemd-logind[1615]: Session 30 logged out. Waiting for processes to exit.
Apr 23 00:13:20.599049 systemd-logind[1615]: Removed session 30.
Apr 23 00:13:25.392600 systemd[1]: Started sshd@29-10.0.0.13:22-10.0.0.1:39250.service - OpenSSH per-connection server daemon (10.0.0.1:39250).
Apr 23 00:13:26.568268 sshd[4478]: Accepted publickey for core from 10.0.0.1 port 39250 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:13:26.595530 sshd-session[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:13:26.716678 containerd[1642]: time="2026-04-23T00:13:26.715980051Z" level=info msg="TaskExit event container_id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" id:\"1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65\" pid:3831 exit_status:1 exited_at:{seconds:1776902908 nanos:197011815}" Apr 23 00:13:26.949026 systemd-logind[1615]: New session 31 of user core. Apr 23 00:13:26.998592 systemd[1]: Started session-31.scope - Session 31 of User core. Apr 23 00:13:28.975377 sshd[4494]: Connection closed by 10.0.0.1 port 39250 Apr 23 00:13:28.981915 sshd-session[4478]: pam_unix(sshd:session): session closed for user core Apr 23 00:13:29.133002 systemd[1]: sshd@29-10.0.0.13:22-10.0.0.1:39250.service: Deactivated successfully. Apr 23 00:13:29.286096 systemd[1]: session-31.scope: Deactivated successfully. Apr 23 00:13:29.393306 systemd[1]: session-31.scope: Consumed 1.046s CPU time, 16M memory peak. Apr 23 00:13:29.461013 systemd-logind[1615]: Session 31 logged out. Waiting for processes to exit. Apr 23 00:13:29.623274 systemd-logind[1615]: Removed session 31. 
Apr 23 00:13:29.670550 kubelet[2933]: I0423 00:13:29.670109 2933 scope.go:117] "RemoveContainer" containerID="aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a" Apr 23 00:13:29.678162 kubelet[2933]: I0423 00:13:29.671918 2933 scope.go:117] "RemoveContainer" containerID="1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65" Apr 23 00:13:29.691170 kubelet[2933]: E0423 00:13:29.689104 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:13:29.780596 containerd[1642]: time="2026-04-23T00:13:29.779730684Z" level=info msg="RemoveContainer for \"aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a\"" Apr 23 00:13:29.780596 containerd[1642]: time="2026-04-23T00:13:29.779884539Z" level=info msg="CreateContainer within sandbox \"3a52df8745692d6d701aeca319c0f89cb4c1c1d062a7028de80f7b311649ba0a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:3,}" Apr 23 00:13:29.909788 containerd[1642]: time="2026-04-23T00:13:29.909032689Z" level=info msg="RemoveContainer for \"aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a\" returns successfully" Apr 23 00:13:29.946240 containerd[1642]: time="2026-04-23T00:13:29.945993981Z" level=info msg="Container 7b49e3ef743a01963761d8d986343acef6a60f3119e736af8381a40ebdce89f0: CDI devices from CRI Config.CDIDevices: []" Apr 23 00:13:30.380657 containerd[1642]: time="2026-04-23T00:13:30.380463144Z" level=info msg="CreateContainer within sandbox \"3a52df8745692d6d701aeca319c0f89cb4c1c1d062a7028de80f7b311649ba0a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:3,} returns container id \"7b49e3ef743a01963761d8d986343acef6a60f3119e736af8381a40ebdce89f0\"" Apr 23 00:13:30.416153 containerd[1642]: time="2026-04-23T00:13:30.415714357Z" level=info msg="StartContainer for \"7b49e3ef743a01963761d8d986343acef6a60f3119e736af8381a40ebdce89f0\"" 
Apr 23 00:13:30.497141 containerd[1642]: time="2026-04-23T00:13:30.496941166Z" level=info msg="connecting to shim 7b49e3ef743a01963761d8d986343acef6a60f3119e736af8381a40ebdce89f0" address="unix:///run/containerd/s/3dfe3d6323e8f8f75559a13206af03e2868c85f01fbf4abe1aaa822e71aa8528" protocol=ttrpc version=3 Apr 23 00:13:31.088492 systemd[1]: Started cri-containerd-7b49e3ef743a01963761d8d986343acef6a60f3119e736af8381a40ebdce89f0.scope - libcontainer container 7b49e3ef743a01963761d8d986343acef6a60f3119e736af8381a40ebdce89f0. Apr 23 00:13:33.282647 containerd[1642]: time="2026-04-23T00:13:33.282503976Z" level=info msg="StartContainer for \"7b49e3ef743a01963761d8d986343acef6a60f3119e736af8381a40ebdce89f0\" returns successfully" Apr 23 00:13:34.174683 systemd[1]: Started sshd@30-10.0.0.13:22-10.0.0.1:41142.service - OpenSSH per-connection server daemon (10.0.0.1:41142). Apr 23 00:13:34.928179 kubelet[2933]: E0423 00:13:34.895323 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:13:35.686846 sshd[4578]: Accepted publickey for core from 10.0.0.1 port 41142 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:13:35.809165 sshd-session[4578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:13:36.441287 systemd-logind[1615]: New session 32 of user core. Apr 23 00:13:36.494647 systemd[1]: Started session-32.scope - Session 32 of User core. 
Apr 23 00:13:40.090512 kubelet[2933]: E0423 00:13:40.089347 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:13:40.992138 sshd[4590]: Connection closed by 10.0.0.1 port 41142 Apr 23 00:13:41.014150 sshd-session[4578]: pam_unix(sshd:session): session closed for user core Apr 23 00:13:41.315139 systemd[1]: sshd@30-10.0.0.13:22-10.0.0.1:41142.service: Deactivated successfully. Apr 23 00:13:41.424810 systemd[1]: session-32.scope: Deactivated successfully. Apr 23 00:13:41.429781 systemd[1]: session-32.scope: Consumed 2.816s CPU time, 18M memory peak. Apr 23 00:13:41.546072 systemd-logind[1615]: Session 32 logged out. Waiting for processes to exit. Apr 23 00:13:41.559471 systemd-logind[1615]: Removed session 32. Apr 23 00:13:46.386640 systemd[1]: Started sshd@31-10.0.0.13:22-10.0.0.1:53022.service - OpenSSH per-connection server daemon (10.0.0.1:53022). Apr 23 00:13:46.766831 kubelet[2933]: E0423 00:13:46.741956 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.334s" Apr 23 00:13:47.470090 sshd[4644]: Accepted publickey for core from 10.0.0.1 port 53022 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:13:47.486475 sshd-session[4644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:13:47.593048 systemd-logind[1615]: New session 33 of user core. Apr 23 00:13:47.602169 systemd[1]: Started session-33.scope - Session 33 of User core. Apr 23 00:13:48.255683 sshd[4650]: Connection closed by 10.0.0.1 port 53022 Apr 23 00:13:48.256333 sshd-session[4644]: pam_unix(sshd:session): session closed for user core Apr 23 00:13:48.261939 systemd-logind[1615]: Session 33 logged out. Waiting for processes to exit. Apr 23 00:13:48.262303 systemd[1]: sshd@31-10.0.0.13:22-10.0.0.1:53022.service: Deactivated successfully. 
Apr 23 00:13:48.264489 systemd[1]: session-33.scope: Deactivated successfully. Apr 23 00:13:48.273888 systemd-logind[1615]: Removed session 33. Apr 23 00:13:49.615535 kubelet[2933]: E0423 00:13:49.613878 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:13:53.661261 systemd[1]: Started sshd@32-10.0.0.13:22-10.0.0.1:53028.service - OpenSSH per-connection server daemon (10.0.0.1:53028). Apr 23 00:13:54.341586 sshd[4686]: Accepted publickey for core from 10.0.0.1 port 53028 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:13:54.348851 sshd-session[4686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:13:54.528761 systemd-logind[1615]: New session 34 of user core. Apr 23 00:13:54.540042 systemd[1]: Started session-34.scope - Session 34 of User core. Apr 23 00:13:55.597347 sshd[4690]: Connection closed by 10.0.0.1 port 53028 Apr 23 00:13:55.610345 sshd-session[4686]: pam_unix(sshd:session): session closed for user core Apr 23 00:13:55.670153 systemd[1]: sshd@32-10.0.0.13:22-10.0.0.1:53028.service: Deactivated successfully. Apr 23 00:13:55.724638 systemd[1]: session-34.scope: Deactivated successfully. Apr 23 00:13:55.749313 systemd-logind[1615]: Session 34 logged out. Waiting for processes to exit. Apr 23 00:13:55.751305 systemd-logind[1615]: Removed session 34. Apr 23 00:14:01.096196 systemd[1]: Started sshd@33-10.0.0.13:22-10.0.0.1:57288.service - OpenSSH per-connection server daemon (10.0.0.1:57288). Apr 23 00:14:02.879956 sshd[4726]: Accepted publickey for core from 10.0.0.1 port 57288 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:14:03.048143 sshd-session[4726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:14:03.331767 systemd-logind[1615]: New session 35 of user core. 
Apr 23 00:14:03.344048 systemd[1]: Started session-35.scope - Session 35 of User core. Apr 23 00:14:03.975733 kubelet[2933]: E0423 00:14:03.915791 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:14:04.553586 kubelet[2933]: E0423 00:14:04.500196 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.067s" Apr 23 00:14:05.696297 sshd[4740]: Connection closed by 10.0.0.1 port 57288 Apr 23 00:14:05.699198 sshd-session[4726]: pam_unix(sshd:session): session closed for user core Apr 23 00:14:05.732954 systemd[1]: sshd@33-10.0.0.13:22-10.0.0.1:57288.service: Deactivated successfully. Apr 23 00:14:05.743346 systemd[1]: session-35.scope: Deactivated successfully. Apr 23 00:14:05.744208 systemd[1]: session-35.scope: Consumed 1.513s CPU time, 16M memory peak. Apr 23 00:14:05.760261 systemd-logind[1615]: Session 35 logged out. Waiting for processes to exit. Apr 23 00:14:05.763555 systemd-logind[1615]: Removed session 35. Apr 23 00:14:10.939138 systemd[1]: Started sshd@34-10.0.0.13:22-10.0.0.1:36166.service - OpenSSH per-connection server daemon (10.0.0.1:36166). Apr 23 00:14:12.654664 sshd[4785]: Accepted publickey for core from 10.0.0.1 port 36166 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:14:12.718179 sshd-session[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:14:13.158267 systemd-logind[1615]: New session 36 of user core. Apr 23 00:14:13.255331 systemd[1]: Started session-36.scope - Session 36 of User core. 
Apr 23 00:14:14.394945 kubelet[2933]: E0423 00:14:14.394350 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:14:15.848375 sshd[4789]: Connection closed by 10.0.0.1 port 36166 Apr 23 00:14:15.862354 sshd-session[4785]: pam_unix(sshd:session): session closed for user core Apr 23 00:14:15.928841 systemd-logind[1615]: Session 36 logged out. Waiting for processes to exit. Apr 23 00:14:15.928890 systemd[1]: sshd@34-10.0.0.13:22-10.0.0.1:36166.service: Deactivated successfully. Apr 23 00:14:15.938296 systemd[1]: session-36.scope: Deactivated successfully. Apr 23 00:14:15.942163 systemd[1]: session-36.scope: Consumed 1.604s CPU time, 15.7M memory peak. Apr 23 00:14:15.953248 systemd-logind[1615]: Removed session 36. Apr 23 00:14:21.461343 systemd[1]: Started sshd@35-10.0.0.13:22-10.0.0.1:44340.service - OpenSSH per-connection server daemon (10.0.0.1:44340). Apr 23 00:14:23.544590 kubelet[2933]: E0423 00:14:23.543253 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.924s" Apr 23 00:14:25.012999 sshd[4832]: Accepted publickey for core from 10.0.0.1 port 44340 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:14:25.144141 sshd-session[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:14:25.327681 kubelet[2933]: E0423 00:14:25.326928 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.777s" Apr 23 00:14:25.328737 kubelet[2933]: E0423 00:14:25.328711 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:14:25.827318 systemd-logind[1615]: New session 37 of user core. 
Apr 23 00:14:25.916939 systemd[1]: Started session-37.scope - Session 37 of User core. Apr 23 00:14:26.394247 kubelet[2933]: E0423 00:14:26.394011 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:14:26.791764 sshd[4860]: Connection closed by 10.0.0.1 port 44340 Apr 23 00:14:26.801171 sshd-session[4832]: pam_unix(sshd:session): session closed for user core Apr 23 00:14:26.861305 systemd[1]: sshd@35-10.0.0.13:22-10.0.0.1:44340.service: Deactivated successfully. Apr 23 00:14:26.924086 systemd[1]: session-37.scope: Deactivated successfully. Apr 23 00:14:27.044907 systemd-logind[1615]: Session 37 logged out. Waiting for processes to exit. Apr 23 00:14:27.086346 systemd-logind[1615]: Removed session 37. Apr 23 00:14:32.050705 systemd[1]: Started sshd@36-10.0.0.13:22-10.0.0.1:52304.service - OpenSSH per-connection server daemon (10.0.0.1:52304). Apr 23 00:14:33.901377 sshd[4894]: Accepted publickey for core from 10.0.0.1 port 52304 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:14:34.040326 sshd-session[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:14:34.608370 systemd-logind[1615]: New session 38 of user core. Apr 23 00:14:34.738949 systemd[1]: Started session-38.scope - Session 38 of User core. Apr 23 00:14:36.821743 containerd[1642]: time="2026-04-23T00:14:36.813278648Z" level=info msg="container event discarded" container=a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386 type=CONTAINER_STOPPED_EVENT Apr 23 00:14:37.463170 sshd[4902]: Connection closed by 10.0.0.1 port 52304 Apr 23 00:14:37.462678 sshd-session[4894]: pam_unix(sshd:session): session closed for user core Apr 23 00:14:37.676134 systemd[1]: sshd@36-10.0.0.13:22-10.0.0.1:52304.service: Deactivated successfully. 
Apr 23 00:14:37.790197 systemd[1]: session-38.scope: Deactivated successfully. Apr 23 00:14:37.918233 systemd[1]: session-38.scope: Consumed 1.700s CPU time, 15.8M memory peak. Apr 23 00:14:37.950303 systemd-logind[1615]: Session 38 logged out. Waiting for processes to exit. Apr 23 00:14:38.075932 systemd-logind[1615]: Removed session 38. Apr 23 00:14:42.656338 systemd[1]: Started sshd@37-10.0.0.13:22-10.0.0.1:58906.service - OpenSSH per-connection server daemon (10.0.0.1:58906). Apr 23 00:14:43.094787 sshd[4952]: Accepted publickey for core from 10.0.0.1 port 58906 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:14:43.108329 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:14:43.131992 systemd-logind[1615]: New session 39 of user core. Apr 23 00:14:43.154067 systemd[1]: Started session-39.scope - Session 39 of User core. Apr 23 00:14:44.043805 sshd[4956]: Connection closed by 10.0.0.1 port 58906 Apr 23 00:14:44.051350 sshd-session[4952]: pam_unix(sshd:session): session closed for user core Apr 23 00:14:44.200282 systemd[1]: sshd@37-10.0.0.13:22-10.0.0.1:58906.service: Deactivated successfully. Apr 23 00:14:44.252318 systemd[1]: session-39.scope: Deactivated successfully. Apr 23 00:14:44.296799 systemd-logind[1615]: Session 39 logged out. Waiting for processes to exit. Apr 23 00:14:44.401816 systemd-logind[1615]: Removed session 39. Apr 23 00:14:49.339220 systemd[1]: Started sshd@38-10.0.0.13:22-10.0.0.1:46962.service - OpenSSH per-connection server daemon (10.0.0.1:46962). Apr 23 00:14:50.783388 sshd[4990]: Accepted publickey for core from 10.0.0.1 port 46962 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:14:50.826758 sshd-session[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:14:51.039699 systemd-logind[1615]: New session 40 of user core. 
Apr 23 00:14:51.101749 systemd[1]: Started session-40.scope - Session 40 of User core. Apr 23 00:14:54.310271 sshd[4994]: Connection closed by 10.0.0.1 port 46962 Apr 23 00:14:54.327237 sshd-session[4990]: pam_unix(sshd:session): session closed for user core Apr 23 00:14:54.397307 systemd[1]: sshd@38-10.0.0.13:22-10.0.0.1:46962.service: Deactivated successfully. Apr 23 00:14:54.540309 systemd[1]: session-40.scope: Deactivated successfully. Apr 23 00:14:54.545187 systemd[1]: session-40.scope: Consumed 2.076s CPU time, 17.1M memory peak. Apr 23 00:14:54.593110 systemd-logind[1615]: Session 40 logged out. Waiting for processes to exit. Apr 23 00:14:54.750088 systemd-logind[1615]: Removed session 40. Apr 23 00:14:55.563941 kubelet[2933]: E0423 00:14:55.563803 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:14:59.839916 systemd[1]: Started sshd@39-10.0.0.13:22-10.0.0.1:50814.service - OpenSSH per-connection server daemon (10.0.0.1:50814). Apr 23 00:15:01.063817 sshd[5047]: Accepted publickey for core from 10.0.0.1 port 50814 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:15:01.255744 sshd-session[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:15:01.685744 systemd-logind[1615]: New session 41 of user core. Apr 23 00:15:01.787970 systemd[1]: Started session-41.scope - Session 41 of User core. Apr 23 00:15:03.686988 sshd[5055]: Connection closed by 10.0.0.1 port 50814 Apr 23 00:15:03.689959 sshd-session[5047]: pam_unix(sshd:session): session closed for user core Apr 23 00:15:03.765398 systemd[1]: sshd@39-10.0.0.13:22-10.0.0.1:50814.service: Deactivated successfully. Apr 23 00:15:03.846044 systemd[1]: session-41.scope: Deactivated successfully. Apr 23 00:15:03.847616 systemd[1]: session-41.scope: Consumed 1.257s CPU time, 15M memory peak. 
Apr 23 00:15:03.858039 systemd-logind[1615]: Session 41 logged out. Waiting for processes to exit. Apr 23 00:15:03.883613 systemd-logind[1615]: Removed session 41. Apr 23 00:15:09.165303 systemd[1]: Started sshd@40-10.0.0.13:22-10.0.0.1:39898.service - OpenSSH per-connection server daemon (10.0.0.1:39898). Apr 23 00:15:11.507357 sshd[5089]: Accepted publickey for core from 10.0.0.1 port 39898 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:15:11.674728 sshd-session[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:15:12.148325 systemd-logind[1615]: New session 42 of user core. Apr 23 00:15:12.150888 systemd[1]: Started session-42.scope - Session 42 of User core. Apr 23 00:15:12.862211 kubelet[2933]: E0423 00:15:12.862164 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.45s" Apr 23 00:15:16.332032 sshd[5105]: Connection closed by 10.0.0.1 port 39898 Apr 23 00:15:16.349203 sshd-session[5089]: pam_unix(sshd:session): session closed for user core Apr 23 00:15:16.612385 systemd[1]: sshd@40-10.0.0.13:22-10.0.0.1:39898.service: Deactivated successfully. Apr 23 00:15:16.741026 systemd[1]: session-42.scope: Deactivated successfully. Apr 23 00:15:16.760225 systemd[1]: session-42.scope: Consumed 2.230s CPU time, 15.1M memory peak. Apr 23 00:15:16.888354 systemd-logind[1615]: Session 42 logged out. Waiting for processes to exit. Apr 23 00:15:17.082025 systemd-logind[1615]: Removed session 42. Apr 23 00:15:21.851247 systemd[1]: Started sshd@41-10.0.0.13:22-10.0.0.1:43426.service - OpenSSH per-connection server daemon (10.0.0.1:43426). 
Apr 23 00:15:22.334130 sshd[5157]: Accepted publickey for core from 10.0.0.1 port 43426 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:15:22.338040 sshd-session[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:15:22.681200 systemd-logind[1615]: New session 43 of user core. Apr 23 00:15:22.835061 systemd[1]: Started session-43.scope - Session 43 of User core. Apr 23 00:15:27.008742 sshd[5163]: Connection closed by 10.0.0.1 port 43426 Apr 23 00:15:27.075105 sshd-session[5157]: pam_unix(sshd:session): session closed for user core Apr 23 00:15:27.187902 systemd[1]: sshd@41-10.0.0.13:22-10.0.0.1:43426.service: Deactivated successfully. Apr 23 00:15:27.354067 systemd[1]: session-43.scope: Deactivated successfully. Apr 23 00:15:27.360940 systemd[1]: session-43.scope: Consumed 2.426s CPU time, 15.6M memory peak. Apr 23 00:15:27.434306 systemd-logind[1615]: Session 43 logged out. Waiting for processes to exit. Apr 23 00:15:27.457292 kubelet[2933]: E0423 00:15:27.453163 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:15:27.460067 systemd[1]: Started sshd@42-10.0.0.13:22-10.0.0.1:39988.service - OpenSSH per-connection server daemon (10.0.0.1:39988). Apr 23 00:15:27.491791 systemd-logind[1615]: Removed session 43. 
Apr 23 00:15:28.816158 kubelet[2933]: E0423 00:15:28.815999 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.37s" Apr 23 00:15:31.033752 kubelet[2933]: E0423 00:15:31.033366 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.581s" Apr 23 00:15:31.743810 sshd[5192]: Accepted publickey for core from 10.0.0.1 port 39988 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:15:31.939076 sshd-session[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:15:32.770326 kubelet[2933]: E0423 00:15:32.768075 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:15:33.174115 systemd-logind[1615]: New session 44 of user core. Apr 23 00:15:33.257286 systemd[1]: Started session-44.scope - Session 44 of User core. 
Apr 23 00:15:38.220337 kubelet[2933]: E0423 00:15:38.216773 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.824s" Apr 23 00:15:41.697131 kubelet[2933]: E0423 00:15:41.695225 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.291s" Apr 23 00:15:42.031945 kubelet[2933]: E0423 00:15:42.024383 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:15:47.030801 kubelet[2933]: E0423 00:15:47.029960 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.634s" Apr 23 00:15:49.195169 sshd[5208]: Connection closed by 10.0.0.1 port 39988 Apr 23 00:15:49.211302 sshd-session[5192]: pam_unix(sshd:session): session closed for user core Apr 23 00:15:49.239353 systemd[1]: sshd@42-10.0.0.13:22-10.0.0.1:39988.service: Deactivated successfully. Apr 23 00:15:49.241138 systemd[1]: sshd@42-10.0.0.13:22-10.0.0.1:39988.service: Consumed 1.201s CPU time, 4M memory peak. Apr 23 00:15:49.255294 systemd[1]: session-44.scope: Deactivated successfully. Apr 23 00:15:49.274093 systemd[1]: session-44.scope: Consumed 8.325s CPU time, 27.4M memory peak. Apr 23 00:15:49.383772 systemd-logind[1615]: Session 44 logged out. Waiting for processes to exit. Apr 23 00:15:49.641253 systemd[1]: Started sshd@43-10.0.0.13:22-10.0.0.1:38292.service - OpenSSH per-connection server daemon (10.0.0.1:38292). Apr 23 00:15:49.725382 systemd-logind[1615]: Removed session 44. 
Apr 23 00:15:50.466811 kubelet[2933]: E0423 00:15:50.463290 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:15:51.135714 sshd[5261]: Accepted publickey for core from 10.0.0.1 port 38292 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:15:51.260881 sshd-session[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:15:52.060253 systemd-logind[1615]: New session 45 of user core. Apr 23 00:15:52.174770 systemd[1]: Started session-45.scope - Session 45 of User core. Apr 23 00:15:52.891910 kubelet[2933]: E0423 00:15:52.890245 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.495s" Apr 23 00:15:59.109800 kubelet[2933]: E0423 00:15:59.101184 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.691s" Apr 23 00:16:08.482945 kubelet[2933]: E0423 00:16:08.482855 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.697s" Apr 23 00:16:09.842852 kubelet[2933]: E0423 00:16:09.832230 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.192s" Apr 23 00:16:10.327827 kubelet[2933]: E0423 00:16:10.325208 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:16:12.699947 kubelet[2933]: E0423 00:16:12.698655 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.307s" Apr 23 00:16:18.833797 kubelet[2933]: E0423 00:16:18.833219 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" 
expected="1s" actual="5.184s" Apr 23 00:16:21.475798 containerd[1642]: time="2026-04-23T00:16:21.364124931Z" level=info msg="container event discarded" container=6a49ddbae22029ec81dc64caa75577b8bdcde0ae29d4a38d8e41a814528f8d1c type=CONTAINER_CREATED_EVENT Apr 23 00:16:23.722039 kubelet[2933]: E0423 00:16:23.721323 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.229s" Apr 23 00:16:25.422150 kubelet[2933]: E0423 00:16:25.415396 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.678s" Apr 23 00:16:33.894237 kubelet[2933]: E0423 00:16:33.885220 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.329s" Apr 23 00:16:34.743748 kubelet[2933]: E0423 00:16:34.742376 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:16:39.775795 sshd[5273]: Connection closed by 10.0.0.1 port 38292 Apr 23 00:16:39.851954 sshd-session[5261]: pam_unix(sshd:session): session closed for user core Apr 23 00:16:40.610926 systemd[1]: Started sshd@44-10.0.0.13:22-10.0.0.1:35372.service - OpenSSH per-connection server daemon (10.0.0.1:35372). Apr 23 00:16:40.982904 systemd[1]: sshd@43-10.0.0.13:22-10.0.0.1:38292.service: Deactivated successfully. Apr 23 00:16:41.096644 systemd[1]: session-45.scope: Deactivated successfully. Apr 23 00:16:41.106347 systemd[1]: session-45.scope: Consumed 20.146s CPU time, 37.1M memory peak. Apr 23 00:16:41.173252 systemd-logind[1615]: Session 45 logged out. Waiting for processes to exit. Apr 23 00:16:41.361277 systemd-logind[1615]: Removed session 45. 
Apr 23 00:16:42.260499 kubelet[2933]: E0423 00:16:42.260008 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.784s" Apr 23 00:16:43.230781 sshd[5398]: Accepted publickey for core from 10.0.0.1 port 35372 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:16:43.304849 sshd-session[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:16:43.718323 systemd-logind[1615]: New session 46 of user core. Apr 23 00:16:43.850063 systemd[1]: Started session-46.scope - Session 46 of User core. Apr 23 00:16:47.660894 kubelet[2933]: E0423 00:16:47.658868 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:16:50.817811 sshd[5417]: Connection closed by 10.0.0.1 port 35372 Apr 23 00:16:50.894559 sshd-session[5398]: pam_unix(sshd:session): session closed for user core Apr 23 00:16:51.290015 systemd[1]: Started sshd@45-10.0.0.13:22-10.0.0.1:60354.service - OpenSSH per-connection server daemon (10.0.0.1:60354). Apr 23 00:16:51.370981 systemd[1]: sshd@44-10.0.0.13:22-10.0.0.1:35372.service: Deactivated successfully. Apr 23 00:16:51.451219 systemd[1]: session-46.scope: Deactivated successfully. Apr 23 00:16:51.453699 systemd[1]: session-46.scope: Consumed 4.857s CPU time, 25.8M memory peak. Apr 23 00:16:51.599735 systemd-logind[1615]: Session 46 logged out. Waiting for processes to exit. Apr 23 00:16:51.760074 kubelet[2933]: E0423 00:16:51.738270 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:16:51.846384 systemd-logind[1615]: Removed session 46. 
Apr 23 00:16:52.545084 kubelet[2933]: E0423 00:16:52.544960 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.09s" Apr 23 00:16:54.116146 sshd[5446]: Accepted publickey for core from 10.0.0.1 port 60354 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:16:54.148695 sshd-session[5446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:16:54.611743 systemd-logind[1615]: New session 47 of user core. Apr 23 00:16:54.649319 systemd[1]: Started session-47.scope - Session 47 of User core. Apr 23 00:16:58.012579 sshd[5475]: Connection closed by 10.0.0.1 port 60354 Apr 23 00:16:58.025866 sshd-session[5446]: pam_unix(sshd:session): session closed for user core Apr 23 00:16:58.164301 systemd[1]: sshd@45-10.0.0.13:22-10.0.0.1:60354.service: Deactivated successfully. Apr 23 00:16:58.309800 systemd[1]: session-47.scope: Deactivated successfully. Apr 23 00:16:58.317156 systemd[1]: session-47.scope: Consumed 2.106s CPU time, 16.8M memory peak. Apr 23 00:16:58.368246 systemd-logind[1615]: Session 47 logged out. Waiting for processes to exit. Apr 23 00:16:58.545248 systemd-logind[1615]: Removed session 47. Apr 23 00:17:01.783720 containerd[1642]: time="2026-04-23T00:17:01.766778907Z" level=info msg="container event discarded" container=a8554647304ffe78aa55682bb7bf5c5ac5d7e7d1ca8506296ea1906ea2275386 type=CONTAINER_DELETED_EVENT Apr 23 00:17:01.801669 containerd[1642]: time="2026-04-23T00:17:01.783813793Z" level=info msg="container event discarded" container=c01a9b837792b74f08cecc1eb68b06df46ff5f0bb92f4c4824de943915851f6b type=CONTAINER_CREATED_EVENT Apr 23 00:17:03.529111 systemd[1]: Started sshd@46-10.0.0.13:22-10.0.0.1:54766.service - OpenSSH per-connection server daemon (10.0.0.1:54766). 
Apr 23 00:17:04.087689 kubelet[2933]: E0423 00:17:04.086263 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 23 00:17:05.490664 sshd[5511]: Accepted publickey for core from 10.0.0.1 port 54766 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE Apr 23 00:17:05.887124 sshd-session[5511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 00:17:06.797688 systemd-logind[1615]: New session 48 of user core. Apr 23 00:17:07.180322 containerd[1642]: time="2026-04-23T00:17:07.095743070Z" level=info msg="container event discarded" container=6a49ddbae22029ec81dc64caa75577b8bdcde0ae29d4a38d8e41a814528f8d1c type=CONTAINER_STARTED_EVENT Apr 23 00:17:07.186908 systemd[1]: Started session-48.scope - Session 48 of User core. Apr 23 00:17:09.062960 kubelet[2933]: E0423 00:17:09.055385 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.655s" Apr 23 00:17:12.650375 kubelet[2933]: E0423 00:17:12.647213 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.429s" Apr 23 00:17:18.175299 kubelet[2933]: E0423 00:17:18.175084 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.341s" Apr 23 00:17:20.538380 kubelet[2933]: E0423 00:17:20.538164 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.28s" Apr 23 00:17:21.101267 sshd[5532]: Connection closed by 10.0.0.1 port 54766 Apr 23 00:17:21.185066 sshd-session[5511]: pam_unix(sshd:session): session closed for user core Apr 23 00:17:21.482114 systemd[1]: sshd@46-10.0.0.13:22-10.0.0.1:54766.service: Deactivated successfully. 
Apr 23 00:17:21.558532 systemd[1]: sshd@46-10.0.0.13:22-10.0.0.1:54766.service: Consumed 1.019s CPU time, 4.2M memory peak.
Apr 23 00:17:21.765222 systemd[1]: session-48.scope: Deactivated successfully.
Apr 23 00:17:21.768115 systemd[1]: session-48.scope: Consumed 7.142s CPU time, 17.7M memory peak.
Apr 23 00:17:21.835280 systemd-logind[1615]: Session 48 logged out. Waiting for processes to exit.
Apr 23 00:17:21.974178 systemd-logind[1615]: Removed session 48.
Apr 23 00:17:26.392277 systemd[1]: Started sshd@47-10.0.0.13:22-10.0.0.1:33798.service - OpenSSH per-connection server daemon (10.0.0.1:33798).
Apr 23 00:17:27.711342 sshd[5597]: Accepted publickey for core from 10.0.0.1 port 33798 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:17:27.750814 sshd-session[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:17:27.954639 systemd-logind[1615]: New session 49 of user core.
Apr 23 00:17:28.150391 systemd[1]: Started session-49.scope - Session 49 of User core.
Apr 23 00:17:29.756194 sshd[5601]: Connection closed by 10.0.0.1 port 33798
Apr 23 00:17:29.758760 sshd-session[5597]: pam_unix(sshd:session): session closed for user core
Apr 23 00:17:29.929665 systemd[1]: sshd@47-10.0.0.13:22-10.0.0.1:33798.service: Deactivated successfully.
Apr 23 00:17:29.952074 systemd[1]: session-49.scope: Deactivated successfully.
Apr 23 00:17:29.955353 systemd[1]: session-49.scope: Consumed 1.151s CPU time, 15.8M memory peak.
Apr 23 00:17:29.966982 systemd-logind[1615]: Session 49 logged out. Waiting for processes to exit.
Apr 23 00:17:30.050345 systemd-logind[1615]: Removed session 49.
Apr 23 00:17:30.752945 containerd[1642]: time="2026-04-23T00:17:30.750377344Z" level=info msg="container event discarded" container=c01a9b837792b74f08cecc1eb68b06df46ff5f0bb92f4c4824de943915851f6b type=CONTAINER_STARTED_EVENT
Apr 23 00:17:34.402565 kubelet[2933]: E0423 00:17:34.402226 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:17:35.355556 systemd[1]: Started sshd@48-10.0.0.13:22-10.0.0.1:33810.service - OpenSSH per-connection server daemon (10.0.0.1:33810).
Apr 23 00:17:37.753761 kubelet[2933]: E0423 00:17:37.752253 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:17:37.885706 sshd[5636]: Accepted publickey for core from 10.0.0.1 port 33810 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:17:37.944355 sshd-session[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:17:38.956235 systemd-logind[1615]: New session 50 of user core.
Apr 23 00:17:39.117634 systemd[1]: Started session-50.scope - Session 50 of User core.
Apr 23 00:17:42.492949 kubelet[2933]: E0423 00:17:42.492770 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.078s"
Apr 23 00:17:46.443051 kubelet[2933]: E0423 00:17:46.441184 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.033s"
Apr 23 00:17:47.733347 sshd[5661]: Connection closed by 10.0.0.1 port 33810
Apr 23 00:17:47.801765 sshd-session[5636]: pam_unix(sshd:session): session closed for user core
Apr 23 00:17:48.293299 systemd[1]: sshd@48-10.0.0.13:22-10.0.0.1:33810.service: Deactivated successfully.
Apr 23 00:17:48.368008 systemd[1]: sshd@48-10.0.0.13:22-10.0.0.1:33810.service: Consumed 1.057s CPU time, 4.1M memory peak.
Apr 23 00:17:48.378223 systemd[1]: session-50.scope: Deactivated successfully.
Apr 23 00:17:48.384987 systemd[1]: session-50.scope: Consumed 4.777s CPU time, 17.8M memory peak.
Apr 23 00:17:48.616393 systemd-logind[1615]: Session 50 logged out. Waiting for processes to exit.
Apr 23 00:17:49.020779 systemd-logind[1615]: Removed session 50.
Apr 23 00:17:49.239674 containerd[1642]: time="2026-04-23T00:17:49.096090024Z" level=info msg="container event discarded" container=c01a9b837792b74f08cecc1eb68b06df46ff5f0bb92f4c4824de943915851f6b type=CONTAINER_STOPPED_EVENT
Apr 23 00:17:50.333389 kubelet[2933]: E0423 00:17:50.328925 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.852s"
Apr 23 00:17:53.390282 containerd[1642]: time="2026-04-23T00:17:53.369210727Z" level=info msg="container event discarded" container=849dfdef3690072d5b505d9b38bcd1891cee46e3a1ead84e76e6e02b3e55b31b type=CONTAINER_CREATED_EVENT
Apr 23 00:17:54.115301 systemd[1]: Started sshd@49-10.0.0.13:22-10.0.0.1:50392.service - OpenSSH per-connection server daemon (10.0.0.1:50392).
Apr 23 00:17:54.572121 kubelet[2933]: E0423 00:17:54.572029 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.171s"
Apr 23 00:17:54.578724 kubelet[2933]: E0423 00:17:54.577028 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:17:55.335776 sshd[5712]: Accepted publickey for core from 10.0.0.1 port 50392 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:17:55.344557 sshd-session[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:17:55.484114 systemd-logind[1615]: New session 51 of user core.
Apr 23 00:17:55.571940 systemd[1]: Started session-51.scope - Session 51 of User core.
Apr 23 00:17:56.673262 containerd[1642]: time="2026-04-23T00:17:56.664179016Z" level=info msg="container event discarded" container=849dfdef3690072d5b505d9b38bcd1891cee46e3a1ead84e76e6e02b3e55b31b type=CONTAINER_STARTED_EVENT
Apr 23 00:17:58.014244 sshd[5726]: Connection closed by 10.0.0.1 port 50392
Apr 23 00:17:58.034329 sshd-session[5712]: pam_unix(sshd:session): session closed for user core
Apr 23 00:17:58.101900 systemd[1]: sshd@49-10.0.0.13:22-10.0.0.1:50392.service: Deactivated successfully.
Apr 23 00:17:58.154099 systemd[1]: session-51.scope: Deactivated successfully.
Apr 23 00:17:58.160299 systemd[1]: session-51.scope: Consumed 1.504s CPU time, 15M memory peak.
Apr 23 00:17:58.256895 systemd-logind[1615]: Session 51 logged out. Waiting for processes to exit.
Apr 23 00:17:58.364325 systemd-logind[1615]: Removed session 51.
Apr 23 00:18:03.388294 systemd[1]: Started sshd@50-10.0.0.13:22-10.0.0.1:49504.service - OpenSSH per-connection server daemon (10.0.0.1:49504).
Apr 23 00:18:03.465216 kubelet[2933]: E0423 00:18:03.464796 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:18:06.089995 sshd[5761]: Accepted publickey for core from 10.0.0.1 port 49504 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:18:06.200188 sshd-session[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:18:06.802965 systemd-logind[1615]: New session 52 of user core.
Apr 23 00:18:06.833177 systemd[1]: Started session-52.scope - Session 52 of User core.
Apr 23 00:18:09.469925 sshd[5782]: Connection closed by 10.0.0.1 port 49504
Apr 23 00:18:09.471326 sshd-session[5761]: pam_unix(sshd:session): session closed for user core
Apr 23 00:18:09.494388 systemd[1]: sshd@50-10.0.0.13:22-10.0.0.1:49504.service: Deactivated successfully.
Apr 23 00:18:09.519754 systemd[1]: sshd@50-10.0.0.13:22-10.0.0.1:49504.service: Consumed 1.216s CPU time, 4.2M memory peak.
Apr 23 00:18:09.526919 systemd[1]: session-52.scope: Deactivated successfully.
Apr 23 00:18:09.531831 systemd[1]: session-52.scope: Consumed 1.770s CPU time, 18.1M memory peak.
Apr 23 00:18:09.544745 systemd-logind[1615]: Session 52 logged out. Waiting for processes to exit.
Apr 23 00:18:09.547791 systemd-logind[1615]: Removed session 52.
Apr 23 00:18:15.204873 systemd[1]: Started sshd@51-10.0.0.13:22-10.0.0.1:57054.service - OpenSSH per-connection server daemon (10.0.0.1:57054).
Apr 23 00:18:18.228756 kubelet[2933]: E0423 00:18:18.158238 2933 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.762s"
Apr 23 00:18:18.975173 sshd[5821]: Accepted publickey for core from 10.0.0.1 port 57054 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:18:19.040216 sshd-session[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:18:19.058668 kubelet[2933]: E0423 00:18:19.054796 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:18:19.508304 systemd[1]: Started session-53.scope - Session 53 of User core.
Apr 23 00:18:19.511397 systemd-logind[1615]: New session 53 of user core.
Apr 23 00:18:23.203245 sshd[5845]: Connection closed by 10.0.0.1 port 57054
Apr 23 00:18:23.263910 sshd-session[5821]: pam_unix(sshd:session): session closed for user core
Apr 23 00:18:23.505364 systemd[1]: sshd@51-10.0.0.13:22-10.0.0.1:57054.service: Deactivated successfully.
Apr 23 00:18:23.549758 systemd[1]: sshd@51-10.0.0.13:22-10.0.0.1:57054.service: Consumed 1.216s CPU time, 4M memory peak.
Apr 23 00:18:23.844364 systemd[1]: session-53.scope: Deactivated successfully.
Apr 23 00:18:23.941899 systemd[1]: session-53.scope: Consumed 2.016s CPU time, 16.2M memory peak.
Apr 23 00:18:24.130367 systemd-logind[1615]: Session 53 logged out. Waiting for processes to exit.
Apr 23 00:18:24.157345 systemd-logind[1615]: Removed session 53.
Apr 23 00:18:28.114213 containerd[1642]: time="2026-04-23T00:18:28.113368721Z" level=info msg="container event discarded" container=1564e278d6a6956832649901a0b1f10fd599957e5299d50711f37769930cfa65 type=CONTAINER_STOPPED_EVENT
Apr 23 00:18:28.256320 systemd[1]: Started sshd@52-10.0.0.13:22-10.0.0.1:36040.service - OpenSSH per-connection server daemon (10.0.0.1:36040).
Apr 23 00:18:28.717126 sshd[5886]: Accepted publickey for core from 10.0.0.1 port 36040 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:18:28.726086 sshd-session[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:18:28.755314 systemd-logind[1615]: New session 54 of user core.
Apr 23 00:18:28.773750 systemd[1]: Started session-54.scope - Session 54 of User core.
Apr 23 00:18:29.954561 containerd[1642]: time="2026-04-23T00:18:29.950274859Z" level=info msg="container event discarded" container=aa65e082525b3b0b7bf7527d9569dac0995c549604cc317744f8b02a34bb9b4a type=CONTAINER_DELETED_EVENT
Apr 23 00:18:30.309369 containerd[1642]: time="2026-04-23T00:18:30.300343666Z" level=info msg="container event discarded" container=7b49e3ef743a01963761d8d986343acef6a60f3119e736af8381a40ebdce89f0 type=CONTAINER_CREATED_EVENT
Apr 23 00:18:30.829928 sshd[5890]: Connection closed by 10.0.0.1 port 36040
Apr 23 00:18:30.839277 sshd-session[5886]: pam_unix(sshd:session): session closed for user core
Apr 23 00:18:31.007056 systemd[1]: sshd@52-10.0.0.13:22-10.0.0.1:36040.service: Deactivated successfully.
Apr 23 00:18:31.056096 systemd[1]: session-54.scope: Deactivated successfully.
Apr 23 00:18:31.083397 systemd[1]: session-54.scope: Consumed 1.208s CPU time, 16.2M memory peak.
Apr 23 00:18:31.152341 systemd-logind[1615]: Session 54 logged out. Waiting for processes to exit.
Apr 23 00:18:31.156130 systemd-logind[1615]: Removed session 54.
Apr 23 00:18:33.266225 containerd[1642]: time="2026-04-23T00:18:33.264094665Z" level=info msg="container event discarded" container=7b49e3ef743a01963761d8d986343acef6a60f3119e736af8381a40ebdce89f0 type=CONTAINER_STARTED_EVENT
Apr 23 00:18:36.147090 systemd[1]: Started sshd@53-10.0.0.13:22-10.0.0.1:38740.service - OpenSSH per-connection server daemon (10.0.0.1:38740).
Apr 23 00:18:36.715998 sshd[5926]: Accepted publickey for core from 10.0.0.1 port 38740 ssh2: RSA SHA256:lSi0kxX0P1+qZO4URDxK1AzUrhv+Frin8qr6HM6g7UE
Apr 23 00:18:36.734362 sshd-session[5926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 00:18:36.950087 systemd-logind[1615]: New session 55 of user core.
Apr 23 00:18:36.997359 systemd[1]: Started session-55.scope - Session 55 of User core.
Apr 23 00:18:38.104973 sshd[5944]: Connection closed by 10.0.0.1 port 38740
Apr 23 00:18:38.112393 sshd-session[5926]: pam_unix(sshd:session): session closed for user core
Apr 23 00:18:38.155113 systemd[1]: sshd@53-10.0.0.13:22-10.0.0.1:38740.service: Deactivated successfully.
Apr 23 00:18:38.161242 systemd[1]: session-55.scope: Deactivated successfully.
Apr 23 00:18:38.170300 systemd-logind[1615]: Session 55 logged out. Waiting for processes to exit.
Apr 23 00:18:38.173212 systemd-logind[1615]: Removed session 55.