Apr 16 02:09:05.506762 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Apr 15 22:39:17 -00 2026 Apr 16 02:09:05.506797 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=15b40c09f238fba45b5bb3e18ef7e289d4e557e0500075f5731dd7eaa53962ae Apr 16 02:09:05.506810 kernel: BIOS-provided physical RAM map: Apr 16 02:09:05.506818 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 16 02:09:05.506826 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 16 02:09:05.506833 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 16 02:09:05.506842 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Apr 16 02:09:05.506850 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Apr 16 02:09:05.506858 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 16 02:09:05.506865 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 16 02:09:05.506873 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 16 02:09:05.506884 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 16 02:09:05.506892 kernel: NX (Execute Disable) protection: active Apr 16 02:09:05.506900 kernel: APIC: Static calls initialized Apr 16 02:09:05.506910 kernel: SMBIOS 2.8 present. Apr 16 02:09:05.506920 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Apr 16 02:09:05.506931 kernel: DMI: Memory slots populated: 1/1 Apr 16 02:09:05.506939 kernel: Hypervisor detected: KVM Apr 16 02:09:05.506948 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000 Apr 16 02:09:05.506957 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 16 02:09:05.506965 kernel: kvm-clock: using sched offset of 8097105406 cycles Apr 16 02:09:05.506975 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 16 02:09:05.506984 kernel: tsc: Detected 2793.438 MHz processor Apr 16 02:09:05.506994 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 16 02:09:05.507003 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 16 02:09:05.507012 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000 Apr 16 02:09:05.507023 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 16 02:09:05.507032 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 16 02:09:05.507040 kernel: Using GB pages for direct mapping Apr 16 02:09:05.507047 kernel: ACPI: Early table checksum verification disabled Apr 16 02:09:05.507055 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Apr 16 02:09:05.507063 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 02:09:05.507072 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 02:09:05.507080 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 02:09:05.507089 kernel: ACPI: FACS 0x000000009CFE0000 000040 Apr 16 02:09:05.507100 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 
02:09:05.507109 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 02:09:05.507119 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 02:09:05.507127 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 02:09:05.507137 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Apr 16 02:09:05.507150 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Apr 16 02:09:05.507161 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Apr 16 02:09:05.507171 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Apr 16 02:09:05.507181 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Apr 16 02:09:05.507190 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Apr 16 02:09:05.507199 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Apr 16 02:09:05.507208 kernel: No NUMA configuration found Apr 16 02:09:05.507217 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Apr 16 02:09:05.507226 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Apr 16 02:09:05.507238 kernel: Zone ranges: Apr 16 02:09:05.507246 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 16 02:09:05.507256 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Apr 16 02:09:05.507264 kernel: Normal empty Apr 16 02:09:05.507273 kernel: Device empty Apr 16 02:09:05.507281 kernel: Movable zone start for each node Apr 16 02:09:05.507291 kernel: Early memory node ranges Apr 16 02:09:05.507301 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 16 02:09:05.507310 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Apr 16 02:09:05.507321 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Apr 16 02:09:05.507329 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 16 02:09:05.507338 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 16 02:09:05.507348 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Apr 16 02:09:05.507358 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 16 02:09:05.507368 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 16 02:09:05.507378 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 16 02:09:05.507388 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 16 02:09:05.507398 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 16 02:09:05.507410 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 16 02:09:05.507420 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 16 02:09:05.507430 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 16 02:09:05.507440 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 16 02:09:05.507450 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 16 02:09:05.507459 kernel: TSC deadline timer available Apr 16 02:09:05.507469 kernel: CPU topo: Max. logical packages: 1 Apr 16 02:09:05.507479 kernel: CPU topo: Max. logical dies: 1 Apr 16 02:09:05.507488 kernel: CPU topo: Max. dies per package: 1 Apr 16 02:09:05.507497 kernel: CPU topo: Max. threads per core: 1 Apr 16 02:09:05.507509 kernel: CPU topo: Num. cores per package: 4 Apr 16 02:09:05.507519 kernel: CPU topo: Num. 
threads per package: 4 Apr 16 02:09:05.507530 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Apr 16 02:09:05.507539 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 16 02:09:05.507549 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 16 02:09:05.507559 kernel: kvm-guest: setup PV sched yield Apr 16 02:09:05.507570 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 16 02:09:05.507579 kernel: Booting paravirtualized kernel on KVM Apr 16 02:09:05.507589 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 16 02:09:05.507603 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Apr 16 02:09:05.507613 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u524288 Apr 16 02:09:05.507623 kernel: pcpu-alloc: s207448 r8192 d30120 u524288 alloc=1*2097152 Apr 16 02:09:05.507633 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 16 02:09:05.507643 kernel: kvm-guest: PV spinlocks enabled Apr 16 02:09:05.507653 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 16 02:09:05.509773 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=15b40c09f238fba45b5bb3e18ef7e289d4e557e0500075f5731dd7eaa53962ae Apr 16 02:09:05.509811 kernel: random: crng init done Apr 16 02:09:05.509826 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 16 02:09:05.509836 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 16 02:09:05.509844 kernel: Fallback order for Node 0: 0 Apr 16 02:09:05.509854 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Apr 16 02:09:05.509864 kernel: Policy zone: DMA32 Apr 16 02:09:05.509874 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 16 02:09:05.509884 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 16 02:09:05.509894 kernel: ftrace: allocating 40126 entries in 157 pages Apr 16 02:09:05.509904 kernel: ftrace: allocated 157 pages with 5 groups Apr 16 02:09:05.509916 kernel: Dynamic Preempt: voluntary Apr 16 02:09:05.509926 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 16 02:09:05.509936 kernel: rcu: RCU event tracing is enabled. Apr 16 02:09:05.509946 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 16 02:09:05.509956 kernel: Trampoline variant of Tasks RCU enabled. Apr 16 02:09:05.509966 kernel: Rude variant of Tasks RCU enabled. Apr 16 02:09:05.509976 kernel: Tracing variant of Tasks RCU enabled. Apr 16 02:09:05.509986 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 16 02:09:05.509997 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 16 02:09:05.510009 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 16 02:09:05.510019 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 16 02:09:05.510029 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 16 02:09:05.510039 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 16 02:09:05.510049 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 16 02:09:05.510057 kernel: Console: colour VGA+ 80x25 Apr 16 02:09:05.510073 kernel: printk: legacy console [ttyS0] enabled Apr 16 02:09:05.510086 kernel: ACPI: Core revision 20240827 Apr 16 02:09:05.510097 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 16 02:09:05.510108 kernel: APIC: Switch to symmetric I/O mode setup Apr 16 02:09:05.510119 kernel: x2apic enabled Apr 16 02:09:05.510129 kernel: APIC: Switched APIC routing to: physical x2apic Apr 16 02:09:05.510142 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 16 02:09:05.510153 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 16 02:09:05.510165 kernel: kvm-guest: setup PV IPIs Apr 16 02:09:05.510176 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 16 02:09:05.510187 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 16 02:09:05.510200 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438) Apr 16 02:09:05.510211 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 16 02:09:05.510221 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 16 02:09:05.510232 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 16 02:09:05.510243 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 16 02:09:05.510253 kernel: Spectre V2 : Mitigation: Retpolines Apr 16 02:09:05.510264 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 16 02:09:05.510275 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Apr 16 02:09:05.510288 kernel: RETBleed: Vulnerable Apr 16 02:09:05.510299 kernel: Speculative Store Bypass: Vulnerable Apr 16 02:09:05.510310 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 16 02:09:05.510320 kernel: GDS: Unknown: Dependent on hypervisor status Apr 16 02:09:05.510331 kernel: active return thunk: its_return_thunk Apr 16 02:09:05.510342 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 16 02:09:05.510353 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 16 02:09:05.510363 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 16 02:09:05.510374 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 16 02:09:05.510387 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 16 02:09:05.510398 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 16 02:09:05.510408 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 16 02:09:05.510419 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 16 02:09:05.510430 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 16 02:09:05.510440 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 16 02:09:05.510451 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 16 02:09:05.510462 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 16 02:09:05.510472 kernel: Freeing SMP alternatives memory: 32K Apr 16 02:09:05.510485 kernel: pid_max: default: 32768 minimum: 301 Apr 16 02:09:05.510496 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Apr 16 02:09:05.510507 kernel: landlock: Up and running. 
Apr 16 02:09:05.510518 kernel: SELinux: Initializing. Apr 16 02:09:05.510528 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 16 02:09:05.510539 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 16 02:09:05.510550 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 16 02:09:05.510561 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 16 02:09:05.510571 kernel: signal: max sigframe size: 3632 Apr 16 02:09:05.510584 kernel: rcu: Hierarchical SRCU implementation. Apr 16 02:09:05.510594 kernel: rcu: Max phase no-delay instances is 400. Apr 16 02:09:05.510605 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Apr 16 02:09:05.510616 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 16 02:09:05.510626 kernel: smp: Bringing up secondary CPUs ... Apr 16 02:09:05.510637 kernel: smpboot: x86: Booting SMP configuration: Apr 16 02:09:05.510648 kernel: .... node #0, CPUs: #1 #2 #3 Apr 16 02:09:05.510658 kernel: smp: Brought up 1 node, 4 CPUs Apr 16 02:09:05.510729 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 16 02:09:05.510744 kernel: Memory: 2419756K/2571752K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46224K init, 2524K bss, 146108K reserved, 0K cma-reserved) Apr 16 02:09:05.510754 kernel: devtmpfs: initialized Apr 16 02:09:05.510764 kernel: x86/mm: Memory block size: 128MB Apr 16 02:09:05.510774 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 16 02:09:05.510784 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 16 02:09:05.510795 kernel: pinctrl core: initialized pinctrl subsystem Apr 16 02:09:05.510805 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 16 02:09:05.510815 kernel: audit: initializing netlink subsys (disabled) Apr 16 02:09:05.510824 kernel: audit: type=2000 audit(1776305338.650:1): state=initialized audit_enabled=0 res=1 Apr 16 02:09:05.510835 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 16 02:09:05.510845 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 16 02:09:05.510854 kernel: cpuidle: using governor menu Apr 16 02:09:05.510863 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 16 02:09:05.510874 kernel: dca service started, version 1.12.1 Apr 16 02:09:05.510885 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Apr 16 02:09:05.510896 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 16 02:09:05.510907 kernel: PCI: Using configuration type 1 for base access Apr 16 02:09:05.510916 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 16 02:09:05.510929 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 16 02:09:05.510939 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 16 02:09:05.510950 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 16 02:09:05.510960 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 16 02:09:05.510970 kernel: ACPI: Added _OSI(Module Device) Apr 16 02:09:05.510981 kernel: ACPI: Added _OSI(Processor Device) Apr 16 02:09:05.510992 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 16 02:09:05.511002 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 16 02:09:05.511012 kernel: ACPI: Interpreter enabled Apr 16 02:09:05.511024 kernel: ACPI: PM: (supports S0 S3 S5) Apr 16 02:09:05.511035 kernel: ACPI: Using IOAPIC for interrupt routing Apr 16 02:09:05.511046 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 16 02:09:05.511056 kernel: PCI: Using E820 reservations for host bridge windows Apr 16 02:09:05.511068 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 16 02:09:05.511077 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 16 02:09:05.511299 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 16 02:09:05.511394 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 16 02:09:05.511484 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 16 02:09:05.511497 kernel: PCI host bridge to bus 0000:00 Apr 16 02:09:05.511591 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 16 02:09:05.511795 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 16 02:09:05.511896 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 16 02:09:05.511972 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 16 02:09:05.512045 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 16 02:09:05.512120 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Apr 16 02:09:05.512194 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 16 02:09:05.512300 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Apr 16 02:09:05.512395 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Apr 16 02:09:05.512480 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Apr 16 02:09:05.512564 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Apr 16 02:09:05.512651 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Apr 16 02:09:05.513041 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 16 02:09:05.513139 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Apr 16 02:09:05.513218 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Apr 16 02:09:05.513296 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Apr 16 02:09:05.513372 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Apr 16 02:09:05.513459 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Apr 16 02:09:05.513544 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Apr 16 02:09:05.513623 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Apr 16 02:09:05.513894 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Apr 16 02:09:05.513991 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Apr 16 02:09:05.514074 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Apr 16 02:09:05.514160 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Apr 16 02:09:05.514247 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Apr 16 02:09:05.514337 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Apr 16 02:09:05.514438 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Apr 16 02:09:05.514525 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 16 02:09:05.514621 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Apr 16 02:09:05.514811 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Apr 16 02:09:05.514892 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Apr 16 02:09:05.514974 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Apr 16 02:09:05.515056 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Apr 16 02:09:05.515067 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 16 02:09:05.515077 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 16 02:09:05.515087 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 16 02:09:05.515096 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 16 02:09:05.515106 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 16 02:09:05.515117 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 16 02:09:05.515126 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 16 02:09:05.515137 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 16 02:09:05.515147 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 16 02:09:05.515158 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 16 02:09:05.515168 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 16 02:09:05.515178 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 16 02:09:05.515187 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 16 02:09:05.515196 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 16 02:09:05.515205 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 16 02:09:05.515214 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 16 02:09:05.515226 kernel: iommu: Default domain type: Translated Apr 16 02:09:05.515235 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 16 02:09:05.515244 kernel: PCI: Using ACPI for IRQ routing Apr 16 02:09:05.515253 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 16 02:09:05.515263 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 16 02:09:05.515273 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Apr 16 02:09:05.515354 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 16 02:09:05.515431 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 16 02:09:05.515499 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 16 02:09:05.515509 kernel: vgaarb: loaded Apr 16 02:09:05.515518 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 16 02:09:05.515526 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 16 02:09:05.515535 kernel: clocksource: Switched to clocksource kvm-clock Apr 16 02:09:05.515543 kernel: VFS: Disk quotas dquot_6.6.0 Apr 16 
02:09:05.515551 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 16 02:09:05.515560 kernel: pnp: PnP ACPI init Apr 16 02:09:05.515635 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 16 02:09:05.515649 kernel: pnp: PnP ACPI: found 6 devices Apr 16 02:09:05.515658 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 16 02:09:05.515762 kernel: NET: Registered PF_INET protocol family Apr 16 02:09:05.515772 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 16 02:09:05.515781 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 16 02:09:05.515789 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 16 02:09:05.515798 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 16 02:09:05.515806 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 16 02:09:05.515817 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 16 02:09:05.515826 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 16 02:09:05.515834 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 16 02:09:05.515843 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 16 02:09:05.515851 kernel: NET: Registered PF_XDP protocol family Apr 16 02:09:05.515925 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 16 02:09:05.515985 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 16 02:09:05.516057 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 16 02:09:05.516126 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 16 02:09:05.516201 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 16 02:09:05.516278 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Apr 16 02:09:05.516291 kernel: PCI: CLS 0 bytes, default 64 Apr 16 02:09:05.516302 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 16 02:09:05.516313 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 16 02:09:05.516324 kernel: Initialise system trusted keyrings Apr 16 02:09:05.516334 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 16 02:09:05.516345 kernel: Key type asymmetric registered Apr 16 02:09:05.516357 kernel: Asymmetric key parser 'x509' registered Apr 16 02:09:05.516368 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 16 02:09:05.516378 kernel: io scheduler mq-deadline registered Apr 16 02:09:05.516388 kernel: io scheduler kyber registered Apr 16 02:09:05.516398 kernel: io scheduler bfq registered Apr 16 02:09:05.516408 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 16 02:09:05.516419 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 16 02:09:05.516430 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 16 02:09:05.516441 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 16 02:09:05.516453 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 16 02:09:05.516463 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 16 02:09:05.516473 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 16 02:09:05.516484 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 16 02:09:05.516494 kernel: serio: i8042 
AUX port at 0x60,0x64 irq 12 Apr 16 02:09:05.516585 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 16 02:09:05.516600 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 16 02:09:05.520062 kernel: rtc_cmos 00:04: registered as rtc0 Apr 16 02:09:05.520218 kernel: rtc_cmos 00:04: setting system clock to 2026-04-16T02:09:04 UTC (1776305344) Apr 16 02:09:05.520305 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 16 02:09:05.520320 kernel: intel_pstate: CPU model not supported Apr 16 02:09:05.520331 kernel: NET: Registered PF_INET6 protocol family Apr 16 02:09:05.520342 kernel: Segment Routing with IPv6 Apr 16 02:09:05.520353 kernel: In-situ OAM (IOAM) with IPv6 Apr 16 02:09:05.520364 kernel: NET: Registered PF_PACKET protocol family Apr 16 02:09:05.520375 kernel: Key type dns_resolver registered Apr 16 02:09:05.520386 kernel: IPI shorthand broadcast: enabled Apr 16 02:09:05.520400 kernel: sched_clock: Marking stable (5838045101, 395163562)->(6707701934, -474493271) Apr 16 02:09:05.520411 kernel: registered taskstats version 1 Apr 16 02:09:05.520421 kernel: Loading compiled-in X.509 certificates Apr 16 02:09:05.520432 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 25c2b596b475a2918f2ba6f953b0a89c09a0d0ab' Apr 16 02:09:05.520442 kernel: Demotion targets for Node 0: null Apr 16 02:09:05.520452 kernel: Key type .fscrypt registered Apr 16 02:09:05.520461 kernel: Key type fscrypt-provisioning registered Apr 16 02:09:05.520471 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 16 02:09:05.520480 kernel: ima: Allocated hash algorithm: sha1 Apr 16 02:09:05.520493 kernel: ima: No architecture policies found Apr 16 02:09:05.520502 kernel: clk: Disabling unused clocks Apr 16 02:09:05.520510 kernel: Warning: unable to open an initial console. Apr 16 02:09:05.520518 kernel: Freeing unused kernel image (initmem) memory: 46224K Apr 16 02:09:05.520526 kernel: Write protecting the kernel read-only data: 40960k Apr 16 02:09:05.520535 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K Apr 16 02:09:05.520543 kernel: Run /init as init process Apr 16 02:09:05.520551 kernel: with arguments: Apr 16 02:09:05.520560 kernel: /init Apr 16 02:09:05.520569 kernel: with environment: Apr 16 02:09:05.520579 kernel: HOME=/ Apr 16 02:09:05.520587 kernel: TERM=linux Apr 16 02:09:05.520598 systemd[1]: Successfully made /usr/ read-only. Apr 16 02:09:05.520613 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 16 02:09:05.520623 systemd[1]: Detected virtualization kvm. Apr 16 02:09:05.520632 systemd[1]: Detected architecture x86-64. Apr 16 02:09:05.520749 systemd[1]: Running in initrd. Apr 16 02:09:05.520762 systemd[1]: No hostname configured, using default hostname. Apr 16 02:09:05.520772 systemd[1]: Hostname set to . Apr 16 02:09:05.520781 systemd[1]: Initializing machine ID from VM UUID. Apr 16 02:09:05.520790 systemd[1]: Queued start job for default target initrd.target. Apr 16 02:09:05.520800 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Apr 16 02:09:05.520809 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 02:09:05.520822 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 16 02:09:05.520831 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 16 02:09:05.520840 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 16 02:09:05.520850 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 16 02:09:05.520864 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 16 02:09:05.520876 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 16 02:09:05.520887 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 02:09:05.520901 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 16 02:09:05.520912 systemd[1]: Reached target paths.target - Path Units. Apr 16 02:09:05.520923 systemd[1]: Reached target slices.target - Slice Units. Apr 16 02:09:05.520935 systemd[1]: Reached target swap.target - Swaps. Apr 16 02:09:05.520946 systemd[1]: Reached target timers.target - Timer Units. Apr 16 02:09:05.520956 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 16 02:09:05.520968 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 16 02:09:05.520979 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 16 02:09:05.520992 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 16 02:09:05.521004 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 16 02:09:05.521015 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 02:09:05.521026 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 02:09:05.521038 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 02:09:05.521050 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 16 02:09:05.521065 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 02:09:05.521076 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 16 02:09:05.521088 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 16 02:09:05.521100 systemd[1]: Starting systemd-fsck-usr.service... Apr 16 02:09:05.521112 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 16 02:09:05.521123 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 16 02:09:05.521135 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 02:09:05.521147 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 16 02:09:05.521198 systemd-journald[204]: Collecting audit messages is disabled. Apr 16 02:09:05.521229 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 02:09:05.521243 systemd-journald[204]: Journal started Apr 16 02:09:05.521270 systemd-journald[204]: Runtime Journal (/run/log/journal/706dcde4b6404e2fbd9d64296867ff37) is 6M, max 48.2M, 42.2M free. 
Apr 16 02:09:05.532250 systemd[1]: Started systemd-journald.service - Journal Service. Apr 16 02:09:05.536324 systemd[1]: Finished systemd-fsck-usr.service. Apr 16 02:09:05.549121 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 16 02:09:05.567188 systemd-modules-load[206]: Inserted module 'overlay' Apr 16 02:09:05.579860 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 02:09:05.787490 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 16 02:09:05.787539 kernel: Bridge firewalling registered Apr 16 02:09:05.629199 systemd-modules-load[206]: Inserted module 'br_netfilter' Apr 16 02:09:05.801872 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 16 02:09:05.811914 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 02:09:05.822441 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 16 02:09:05.835940 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 02:09:05.860880 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 02:09:05.887228 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 16 02:09:05.916409 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 02:09:05.920942 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 16 02:09:05.966074 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 02:09:05.974430 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 02:09:05.985418 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 02:09:06.004654 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 16 02:09:06.022380 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 02:09:06.075973 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=15b40c09f238fba45b5bb3e18ef7e289d4e557e0500075f5731dd7eaa53962ae Apr 16 02:09:06.258262 systemd-resolved[244]: Positive Trust Anchors: Apr 16 02:09:06.258298 systemd-resolved[244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 02:09:06.258335 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 02:09:06.272030 systemd-resolved[244]: Defaulting to hostname 'linux'. 
Apr 16 02:09:06.273394 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 02:09:06.331349 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 02:09:06.596780 kernel: SCSI subsystem initialized Apr 16 02:09:06.628792 kernel: Loading iSCSI transport class v2.0-870. Apr 16 02:09:06.670062 kernel: iscsi: registered transport (tcp) Apr 16 02:09:06.707237 kernel: iscsi: registered transport (qla4xxx) Apr 16 02:09:06.707316 kernel: QLogic iSCSI HBA Driver Apr 16 02:09:06.802454 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 16 02:09:06.846347 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 16 02:09:06.864409 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 16 02:09:07.065813 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 16 02:09:07.079997 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 16 02:09:07.244917 kernel: raid6: avx512x4 gen() 17972 MB/s Apr 16 02:09:07.263338 kernel: raid6: avx512x2 gen() 29600 MB/s Apr 16 02:09:07.281345 kernel: raid6: avx512x1 gen() 25041 MB/s Apr 16 02:09:07.299328 kernel: raid6: avx2x4 gen() 15990 MB/s Apr 16 02:09:07.317353 kernel: raid6: avx2x2 gen() 15251 MB/s Apr 16 02:09:07.337937 kernel: raid6: avx2x1 gen() 12418 MB/s Apr 16 02:09:07.338042 kernel: raid6: using algorithm avx512x2 gen() 29600 MB/s Apr 16 02:09:07.355056 kernel: raid6: .... xor() 18511 MB/s, rmw enabled Apr 16 02:09:07.355165 kernel: raid6: using avx512x2 recovery algorithm Apr 16 02:09:07.383637 kernel: xor: automatically using best checksumming function avx Apr 16 02:09:07.814269 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 16 02:09:07.880727 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 16 02:09:07.893471 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 02:09:08.002060 systemd-udevd[454]: Using default interface naming scheme 'v255'. Apr 16 02:09:08.014516 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 02:09:08.032276 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 16 02:09:08.188607 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Apr 16 02:09:08.381493 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 16 02:09:08.399483 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 16 02:09:08.519431 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 02:09:08.545956 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 16 02:09:08.782272 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 02:09:08.782454 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 02:09:08.820439 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 02:09:08.828011 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 02:09:08.828644 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Apr 16 02:09:08.921194 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 16 02:09:08.931607 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 16 02:09:08.946743 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 16 02:09:08.946834 kernel: GPT:9289727 != 19775487 Apr 16 02:09:08.946848 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 16 02:09:08.946861 kernel: GPT:9289727 != 19775487 Apr 16 02:09:08.946872 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 16 02:09:08.946886 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 02:09:09.047920 kernel: cryptd: max_cpu_qlen set to 1000 Apr 16 02:09:09.137902 kernel: libata version 3.00 loaded. Apr 16 02:09:09.137977 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 16 02:09:09.264963 kernel: ahci 0000:00:1f.2: version 3.0 Apr 16 02:09:09.265163 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 16 02:09:09.265176 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 16 02:09:09.265251 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 16 02:09:09.265317 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 16 02:09:09.270426 kernel: AES CTR mode by8 optimization enabled Apr 16 02:09:09.278791 kernel: scsi host0: ahci Apr 16 02:09:09.288846 kernel: scsi host1: ahci Apr 16 02:09:09.306707 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 16 02:09:09.332826 kernel: scsi host2: ahci Apr 16 02:09:09.344111 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 02:09:09.390782 kernel: scsi host3: ahci Apr 16 02:09:09.392552 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 16 02:09:09.410635 kernel: scsi host4: ahci Apr 16 02:09:09.422249 kernel: scsi host5: ahci Apr 16 02:09:09.430977 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Apr 16 02:09:09.431077 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Apr 16 02:09:09.434225 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Apr 16 02:09:09.437506 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Apr 16 02:09:09.444446 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Apr 16 02:09:09.444536 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Apr 16 02:09:09.446448 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 16 02:09:09.464213 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 16 02:09:09.468305 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 16 02:09:09.510778 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 16 02:09:09.546652 disk-uuid[647]: Primary Header is updated. Apr 16 02:09:09.546652 disk-uuid[647]: Secondary Entries is updated. Apr 16 02:09:09.546652 disk-uuid[647]: Secondary Header is updated. 
Apr 16 02:09:09.557787 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 02:09:09.780813 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 16 02:09:09.780882 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 16 02:09:09.780896 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 16 02:09:09.810774 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 16 02:09:09.816790 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 16 02:09:09.816863 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 16 02:09:09.824068 kernel: ata3.00: LPM support broken, forcing max_power Apr 16 02:09:09.826905 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 16 02:09:09.826987 kernel: ata3.00: applying bridge limits Apr 16 02:09:09.833108 kernel: ata3.00: LPM support broken, forcing max_power Apr 16 02:09:09.833193 kernel: ata3.00: configured for UDMA/100 Apr 16 02:09:09.853419 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 16 02:09:09.952392 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 16 02:09:09.954575 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 16 02:09:09.983884 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 16 02:09:10.532577 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 16 02:09:10.567870 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 16 02:09:10.610331 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 02:09:10.631521 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 02:09:10.622086 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 02:09:10.635490 disk-uuid[648]: The operation has completed successfully. Apr 16 02:09:10.628039 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 16 02:09:10.702507 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 16 02:09:10.753398 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 16 02:09:10.756897 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 16 02:09:10.875889 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 16 02:09:10.974014 sh[676]: Success Apr 16 02:09:11.098383 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 16 02:09:11.098473 kernel: device-mapper: uevent: version 1.0.3 Apr 16 02:09:11.104715 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 16 02:09:11.234153 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 16 02:09:11.313510 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 16 02:09:11.335514 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 16 02:09:11.341245 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 16 02:09:11.419867 kernel: BTRFS: device fsid 20ab7e7c-5d1e-4cd5-bec1-5b111d7138f2 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (688) Apr 16 02:09:11.426500 kernel: BTRFS info (device dm-0): first mount of filesystem 20ab7e7c-5d1e-4cd5-bec1-5b111d7138f2 Apr 16 02:09:11.426606 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 16 02:09:11.483566 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 16 02:09:11.483756 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 16 02:09:11.517576 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 16 02:09:11.528126 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 16 02:09:11.535058 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 16 02:09:11.540153 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 16 02:09:11.548925 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 16 02:09:11.691989 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (725) Apr 16 02:09:11.705065 kernel: BTRFS info (device vda6): first mount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 02:09:11.705152 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 02:09:11.726364 kernel: BTRFS info (device vda6): turning on async discard Apr 16 02:09:11.726450 kernel: BTRFS info (device vda6): enabling free space tree Apr 16 02:09:11.743794 kernel: BTRFS info (device vda6): last unmount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 02:09:11.756525 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 16 02:09:11.770857 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 16 02:09:12.111484 ignition[782]: Ignition 2.22.0 Apr 16 02:09:12.111710 ignition[782]: Stage: fetch-offline Apr 16 02:09:12.111786 ignition[782]: no configs at "/usr/lib/ignition/base.d" Apr 16 02:09:12.111795 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 02:09:12.111898 ignition[782]: parsed url from cmdline: "" Apr 16 02:09:12.111901 ignition[782]: no config URL provided Apr 16 02:09:12.111907 ignition[782]: reading system config file "/usr/lib/ignition/user.ign" Apr 16 02:09:12.111913 ignition[782]: no config at "/usr/lib/ignition/user.ign" Apr 16 02:09:12.111937 ignition[782]: op(1): [started] loading QEMU firmware config module Apr 16 02:09:12.111943 ignition[782]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 16 02:09:12.140889 ignition[782]: op(1): [finished] loading QEMU firmware config module Apr 16 02:09:12.156466 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 16 02:09:12.167208 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 02:09:12.324496 systemd-networkd[866]: lo: Link UP Apr 16 02:09:12.324551 systemd-networkd[866]: lo: Gained carrier Apr 16 02:09:12.330933 systemd-networkd[866]: Enumeration completed Apr 16 02:09:12.331861 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 16 02:09:12.335125 systemd[1]: Reached target network.target - Network. Apr 16 02:09:12.339042 systemd-networkd[866]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 16 02:09:12.339049 systemd-networkd[866]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 02:09:12.340839 systemd-networkd[866]: eth0: Link UP Apr 16 02:09:12.340964 systemd-networkd[866]: eth0: Gained carrier Apr 16 02:09:12.340979 systemd-networkd[866]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 02:09:12.456095 systemd-networkd[866]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 16 02:09:12.536209 ignition[782]: parsing config with SHA512: 16440cc0eb8ba9ea520d2b06ed99954d26c5a30bba5f882bb8a366476758221528b00b9af80faa8804812d5665bda5d93fc64befcc4b9598e11ff14731e2cb22 Apr 16 02:09:12.633315 unknown[782]: fetched base config from "system" Apr 16 02:09:12.636157 unknown[782]: fetched user config from "qemu" Apr 16 02:09:12.641282 ignition[782]: fetch-offline: fetch-offline passed Apr 16 02:09:12.641475 ignition[782]: Ignition finished successfully Apr 16 02:09:12.648364 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 16 02:09:12.659884 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 16 02:09:12.671117 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 16 02:09:12.787603 ignition[871]: Ignition 2.22.0 Apr 16 02:09:12.789084 ignition[871]: Stage: kargs Apr 16 02:09:12.789308 ignition[871]: no configs at "/usr/lib/ignition/base.d" Apr 16 02:09:12.789318 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 02:09:12.797814 ignition[871]: kargs: kargs passed Apr 16 02:09:12.797916 ignition[871]: Ignition finished successfully Apr 16 02:09:12.832083 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 16 02:09:12.841474 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 16 02:09:12.959161 ignition[879]: Ignition 2.22.0 Apr 16 02:09:12.960523 ignition[879]: Stage: disks Apr 16 02:09:12.960842 ignition[879]: no configs at "/usr/lib/ignition/base.d" Apr 16 02:09:12.960851 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 02:09:12.962355 ignition[879]: disks: disks passed Apr 16 02:09:12.962423 ignition[879]: Ignition finished successfully Apr 16 02:09:12.986991 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 16 02:09:13.011088 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 16 02:09:13.019230 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 16 02:09:13.024034 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 16 02:09:13.034502 systemd[1]: Reached target sysinit.target - System Initialization. Apr 16 02:09:13.038145 systemd[1]: Reached target basic.target - Basic System. Apr 16 02:09:13.060124 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 16 02:09:13.165160 systemd-fsck[889]: ROOT: clean, 15/553520 files, 52789/553472 blocks Apr 16 02:09:13.183584 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 16 02:09:13.229984 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 16 02:09:13.671006 kernel: EXT4-fs (vda9): mounted filesystem 75cd5b5e-229f-474b-8de5-870bc4bccaf1 r/w with ordered data mode. Quota mode: none. Apr 16 02:09:13.671578 systemd[1]: Mounted sysroot.mount - /sysroot. 
Apr 16 02:09:13.674269 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 16 02:09:13.697939 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 16 02:09:13.706224 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 16 02:09:13.707455 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 16 02:09:13.707512 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 16 02:09:13.707544 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 16 02:09:13.762355 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (897) Apr 16 02:09:13.770762 kernel: BTRFS info (device vda6): first mount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 02:09:13.770826 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 02:09:13.771404 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 16 02:09:13.781248 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 16 02:09:13.824788 kernel: BTRFS info (device vda6): turning on async discard Apr 16 02:09:13.824880 kernel: BTRFS info (device vda6): enabling free space tree Apr 16 02:09:13.839608 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 16 02:09:13.961658 initrd-setup-root[921]: cut: /sysroot/etc/passwd: No such file or directory Apr 16 02:09:14.039386 initrd-setup-root[928]: cut: /sysroot/etc/group: No such file or directory Apr 16 02:09:14.057627 initrd-setup-root[935]: cut: /sysroot/etc/shadow: No such file or directory Apr 16 02:09:14.075937 initrd-setup-root[942]: cut: /sysroot/etc/gshadow: No such file or directory Apr 16 02:09:14.141282 systemd-networkd[866]: eth0: Gained IPv6LL Apr 16 02:09:14.616813 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 16 02:09:14.625497 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 16 02:09:14.637266 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 16 02:09:14.678190 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 16 02:09:14.690919 kernel: BTRFS info (device vda6): last unmount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 02:09:14.785151 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 16 02:09:14.812160 ignition[1010]: INFO : Ignition 2.22.0 Apr 16 02:09:14.817993 ignition[1010]: INFO : Stage: mount Apr 16 02:09:14.817993 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 02:09:14.817993 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 02:09:14.832561 ignition[1010]: INFO : mount: mount passed Apr 16 02:09:14.832561 ignition[1010]: INFO : Ignition finished successfully Apr 16 02:09:14.824781 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 16 02:09:14.840884 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 16 02:09:14.913581 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 16 02:09:14.991784 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1023) Apr 16 02:09:15.004839 kernel: BTRFS info (device vda6): first mount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 02:09:15.004925 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 02:09:15.028015 kernel: BTRFS info (device vda6): turning on async discard Apr 16 02:09:15.028104 kernel: BTRFS info (device vda6): enabling free space tree Apr 16 02:09:15.037850 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 16 02:09:15.142390 ignition[1041]: INFO : Ignition 2.22.0 Apr 16 02:09:15.142390 ignition[1041]: INFO : Stage: files Apr 16 02:09:15.142390 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 02:09:15.142390 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 02:09:15.142390 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping Apr 16 02:09:15.171881 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 16 02:09:15.171881 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 16 02:09:15.246976 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 16 02:09:15.259102 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 16 02:09:15.267585 unknown[1041]: wrote ssh authorized keys file for user: core Apr 16 02:09:15.274862 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 16 02:09:15.294053 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 16 02:09:15.306044 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 16 02:09:15.407572 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 16 02:09:15.578480 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 16 02:09:15.578480 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 16 02:09:15.595272 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 16 02:09:15.875557 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 16 02:09:15.945055 kernel: hrtimer: interrupt took 26205545 ns Apr 16 02:09:16.136997 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 16 02:09:16.136997 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 16 02:09:16.136997 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 16 02:09:16.136997 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 16 02:09:16.136997 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Apr 16 02:09:16.171472 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 02:09:16.171472 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 02:09:16.171472 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 02:09:16.171472 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 02:09:16.171472 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 16 02:09:16.171472 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 16 02:09:16.171472 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 16 02:09:16.171472 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 16 02:09:16.171472 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 16 02:09:16.171472 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Apr 16 02:09:16.435400 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 16 02:09:17.669312 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 16 02:09:17.669312 ignition[1041]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Apr 16 02:09:17.708622 ignition[1041]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 16 02:09:17.708622 ignition[1041]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 16 02:09:17.708622 ignition[1041]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Apr 16 02:09:17.708622 ignition[1041]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Apr 16 02:09:17.708622 ignition[1041]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 16 02:09:17.708622 ignition[1041]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 16 02:09:17.708622 ignition[1041]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Apr 16 02:09:17.708622 ignition[1041]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Apr 16 02:09:17.827233 ignition[1041]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 16 02:09:17.838936 ignition[1041]: INFO : files: op(10): op(11): [finished] 
removing enablement symlink(s) for "coreos-metadata.service" Apr 16 02:09:17.848048 ignition[1041]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Apr 16 02:09:17.848048 ignition[1041]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Apr 16 02:09:17.848048 ignition[1041]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Apr 16 02:09:17.848048 ignition[1041]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 16 02:09:17.848048 ignition[1041]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 16 02:09:17.848048 ignition[1041]: INFO : files: files passed Apr 16 02:09:17.848048 ignition[1041]: INFO : Ignition finished successfully Apr 16 02:09:17.907234 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 16 02:09:17.946554 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 16 02:09:17.958178 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 16 02:09:18.012114 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 16 02:09:18.012218 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 16 02:09:18.036515 initrd-setup-root-after-ignition[1069]: grep: /sysroot/oem/oem-release: No such file or directory Apr 16 02:09:18.044590 initrd-setup-root-after-ignition[1071]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 02:09:18.044590 initrd-setup-root-after-ignition[1071]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 16 02:09:18.053184 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 02:09:18.047384 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 16 02:09:18.065956 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 16 02:09:18.076163 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 16 02:09:18.355983 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 16 02:09:18.356162 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 16 02:09:18.376452 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 16 02:09:18.379252 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 16 02:09:18.379558 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 16 02:09:18.412163 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 16 02:09:18.479382 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 16 02:09:18.554402 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 16 02:09:18.603811 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 16 02:09:18.614456 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 02:09:18.627325 systemd[1]: Stopped target timers.target - Timer Units. Apr 16 02:09:18.633923 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 16 02:09:18.636525 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Apr 16 02:09:18.664708 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 16 02:09:18.672589 systemd[1]: Stopped target basic.target - Basic System. Apr 16 02:09:18.753651 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 16 02:09:18.762135 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 16 02:09:18.768300 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 16 02:09:18.768510 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Apr 16 02:09:18.769417 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 16 02:09:18.794351 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 16 02:09:18.799958 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 16 02:09:18.807314 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 16 02:09:18.814259 systemd[1]: Stopped target swap.target - Swaps. Apr 16 02:09:18.817385 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 16 02:09:18.817575 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 16 02:09:18.829581 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 16 02:09:18.830023 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 02:09:18.845551 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 16 02:09:18.851366 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 02:09:18.876495 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 16 02:09:18.876796 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 16 02:09:18.961496 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 16 02:09:18.961784 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 16 02:09:18.980442 systemd[1]: Stopped target paths.target - Path Units. Apr 16 02:09:18.989279 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 16 02:09:18.993028 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 02:09:19.015291 systemd[1]: Stopped target slices.target - Slice Units. Apr 16 02:09:19.024434 systemd[1]: Stopped target sockets.target - Socket Units. Apr 16 02:09:19.026648 systemd[1]: iscsid.socket: Deactivated successfully. Apr 16 02:09:19.029037 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 16 02:09:19.034260 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 16 02:09:19.034395 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 16 02:09:19.034604 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 16 02:09:19.034802 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 16 02:09:19.034957 systemd[1]: ignition-files.service: Deactivated successfully. Apr 16 02:09:19.035250 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 16 02:09:19.037798 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 16 02:09:19.051531 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 16 02:09:19.051840 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 02:09:19.149647 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Apr 16 02:09:19.222578 ignition[1095]: INFO : Ignition 2.22.0 Apr 16 02:09:19.222578 ignition[1095]: INFO : Stage: umount Apr 16 02:09:19.222578 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 02:09:19.222578 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 02:09:19.222578 ignition[1095]: INFO : umount: umount passed Apr 16 02:09:19.222578 ignition[1095]: INFO : Ignition finished successfully Apr 16 02:09:19.180095 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 16 02:09:19.180381 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 02:09:19.180630 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 16 02:09:19.180866 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 16 02:09:19.196335 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 16 02:09:19.196437 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 16 02:09:19.213120 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 16 02:09:19.223284 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 16 02:09:19.223440 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 16 02:09:19.228196 systemd[1]: Stopped target network.target - Network. Apr 16 02:09:19.234333 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 16 02:09:19.234439 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 16 02:09:19.234533 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 16 02:09:19.234563 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 16 02:09:19.234608 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 16 02:09:19.234639 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 16 02:09:19.234871 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 16 02:09:19.234983 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 16 02:09:19.355633 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 16 02:09:19.371571 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 16 02:09:19.378354 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 16 02:09:19.378464 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 16 02:09:19.387990 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 16 02:09:19.388117 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 16 02:09:19.425541 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 16 02:09:19.425808 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 16 02:09:19.447080 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Apr 16 02:09:19.447422 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 16 02:09:19.447547 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 16 02:09:19.568937 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Apr 16 02:09:19.576784 systemd[1]: Stopped target network-pre.target - Preparation for Network. Apr 16 02:09:19.589378 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 16 02:09:19.589444 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 16 02:09:19.604296 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Apr 16 02:09:19.610133 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 16 02:09:19.610240 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 16 02:09:19.619290 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 16 02:09:19.619378 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 16 02:09:19.622553 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 16 02:09:19.622635 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 16 02:09:19.681115 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 16 02:09:19.681201 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 02:09:19.748251 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 02:09:19.764166 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 16 02:09:19.764252 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Apr 16 02:09:19.802035 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 16 02:09:19.802212 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 02:09:19.814891 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 16 02:09:19.815066 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 16 02:09:19.834250 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 16 02:09:19.834343 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 16 02:09:19.841133 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 16 02:09:19.841194 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 02:09:19.852047 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 16 02:09:19.852187 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 16 02:09:19.864794 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 16 02:09:19.864928 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 16 02:09:19.872987 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 16 02:09:19.873085 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 02:09:19.928426 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 16 02:09:19.937132 systemd[1]: systemd-network-generator.service: Deactivated successfully. Apr 16 02:09:19.937247 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Apr 16 02:09:19.947239 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 16 02:09:19.947326 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 02:09:19.958752 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 02:09:19.958851 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 02:09:19.973440 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Apr 16 02:09:19.973517 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Apr 16 02:09:19.973558 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Apr 16 02:09:19.974200 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 16 02:09:19.974611 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 16 02:09:19.977660 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 16 02:09:20.012094 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 16 02:09:20.109161 systemd[1]: Switching root. Apr 16 02:09:20.179253 systemd-journald[204]: Journal stopped Apr 16 02:09:23.875051 systemd-journald[204]: Received SIGTERM from PID 1 (systemd). Apr 16 02:09:23.875161 kernel: SELinux: policy capability network_peer_controls=1 Apr 16 02:09:23.875182 kernel: SELinux: policy capability open_perms=1 Apr 16 02:09:23.875195 kernel: SELinux: policy capability extended_socket_class=1 Apr 16 02:09:23.875207 kernel: SELinux: policy capability always_check_network=0 Apr 16 02:09:23.877537 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 16 02:09:23.877556 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 16 02:09:23.877574 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 16 02:09:23.877587 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 16 02:09:23.877599 kernel: SELinux: policy capability userspace_initial_context=0 Apr 16 02:09:23.877612 kernel: audit: type=1403 audit(1776305360.607:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 16 02:09:23.877628 systemd[1]: Successfully loaded SELinux policy in 174.255ms. Apr 16 02:09:23.877658 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.975ms. Apr 16 02:09:23.877885 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 16 02:09:23.877908 systemd[1]: Detected virtualization kvm. Apr 16 02:09:23.877920 systemd[1]: Detected architecture x86-64. Apr 16 02:09:23.877933 systemd[1]: Detected first boot. Apr 16 02:09:23.877945 systemd[1]: Initializing machine ID from VM UUID. Apr 16 02:09:23.877958 zram_generator::config[1141]: No configuration found. Apr 16 02:09:23.877972 kernel: Guest personality initialized and is inactive Apr 16 02:09:23.877986 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Apr 16 02:09:23.877998 kernel: Initialized host personality Apr 16 02:09:23.878011 kernel: NET: Registered PF_VSOCK protocol family Apr 16 02:09:23.878032 systemd[1]: Populated /etc with preset unit settings. Apr 16 02:09:23.878048 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Apr 16 02:09:23.878061 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 16 02:09:23.878073 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 16 02:09:23.878091 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 16 02:09:23.878104 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 16 02:09:23.878118 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 16 02:09:23.878132 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 16 02:09:23.878146 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Apr 16 02:09:23.878158 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 16 02:09:23.878170 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 16 02:09:23.878182 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 16 02:09:23.878195 systemd[1]: Created slice user.slice - User and Session Slice. Apr 16 02:09:23.878207 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 02:09:23.878220 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 02:09:23.878232 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 16 02:09:23.878245 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 16 02:09:23.878262 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 16 02:09:23.878275 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 16 02:09:23.878290 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 16 02:09:23.878303 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 02:09:23.878318 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 16 02:09:23.878331 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 16 02:09:23.878344 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 16 02:09:23.878358 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 16 02:09:23.878373 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 16 02:09:23.878386 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 02:09:23.878397 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 02:09:23.878411 systemd[1]: Reached target slices.target - Slice Units. Apr 16 02:09:23.878423 systemd[1]: Reached target swap.target - Swaps. Apr 16 02:09:23.878434 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 16 02:09:23.878446 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 16 02:09:23.878459 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 16 02:09:23.878473 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 16 02:09:23.878487 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 02:09:23.878500 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 02:09:23.878512 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 16 02:09:23.878524 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 16 02:09:23.878537 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 16 02:09:23.878550 systemd[1]: Mounting media.mount - External Media Directory... Apr 16 02:09:23.878564 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 02:09:23.878578 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 16 02:09:23.878594 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Apr 16 02:09:23.878609 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 16 02:09:23.878622 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 16 02:09:23.878634 systemd[1]: Reached target machines.target - Containers. Apr 16 02:09:23.878647 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 16 02:09:23.878660 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 02:09:23.878905 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 02:09:23.878928 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 16 02:09:23.878940 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 02:09:23.878958 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 02:09:23.878971 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 02:09:23.878983 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 16 02:09:23.878995 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 02:09:23.879009 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 16 02:09:23.879025 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 16 02:09:23.879039 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 16 02:09:23.879051 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 16 02:09:23.879066 systemd[1]: Stopped systemd-fsck-usr.service. Apr 16 02:09:23.879079 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 02:09:23.879093 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 16 02:09:23.879106 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 16 02:09:23.879119 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 16 02:09:23.879132 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 16 02:09:23.879145 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 16 02:09:23.879157 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 16 02:09:23.879170 systemd[1]: verity-setup.service: Deactivated successfully. Apr 16 02:09:23.879184 systemd[1]: Stopped verity-setup.service. Apr 16 02:09:23.879196 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 02:09:23.879209 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 16 02:09:23.879221 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 16 02:09:23.879234 systemd[1]: Mounted media.mount - External Media Directory. Apr 16 02:09:23.879248 kernel: ACPI: bus type drm_connector registered Apr 16 02:09:23.879261 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Apr 16 02:09:23.879273 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 16 02:09:23.879338 systemd-journald[1226]: Collecting audit messages is disabled. Apr 16 02:09:23.879372 kernel: loop: module loaded Apr 16 02:09:23.879384 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 16 02:09:23.879399 systemd-journald[1226]: Journal started Apr 16 02:09:23.879424 systemd-journald[1226]: Runtime Journal (/run/log/journal/706dcde4b6404e2fbd9d64296867ff37) is 6M, max 48.2M, 42.2M free. Apr 16 02:09:22.623024 systemd[1]: Queued start job for default target multi-user.target. Apr 16 02:09:22.671814 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 16 02:09:22.680515 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 16 02:09:22.696521 systemd[1]: systemd-journald.service: Consumed 1.192s CPU time. Apr 16 02:09:23.913047 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 16 02:09:23.920486 systemd[1]: Started systemd-journald.service - Journal Service. Apr 16 02:09:23.922619 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 02:09:23.930321 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 16 02:09:23.930584 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 16 02:09:23.937344 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 02:09:23.937557 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 02:09:23.941041 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 02:09:23.941233 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 02:09:23.954829 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 16 02:09:23.958586 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 16 02:09:23.973588 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 16 02:09:24.048172 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 16 02:09:24.054961 kernel: fuse: init (API version 7.41) Apr 16 02:09:24.055264 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 02:09:24.060961 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 02:09:24.064358 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 16 02:09:24.067042 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 16 02:09:24.073216 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 02:09:24.073460 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 02:09:24.082932 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 02:09:24.105473 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 16 02:09:24.112238 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 16 02:09:24.118218 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 16 02:09:24.126419 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 16 02:09:24.126585 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 16 02:09:24.130979 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Apr 16 02:09:24.145310 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 16 02:09:24.152399 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 02:09:24.154325 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 16 02:09:24.163125 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 16 02:09:24.172771 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 02:09:24.176571 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 16 02:09:24.180019 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 02:09:24.220637 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 02:09:24.249553 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 16 02:09:24.265013 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 16 02:09:24.267892 systemd-journald[1226]: Time spent on flushing to /var/log/journal/706dcde4b6404e2fbd9d64296867ff37 is 20.358ms for 989 entries. Apr 16 02:09:24.267892 systemd-journald[1226]: System Journal (/var/log/journal/706dcde4b6404e2fbd9d64296867ff37) is 8M, max 195.6M, 187.6M free. Apr 16 02:09:24.345120 systemd-journald[1226]: Received client request to flush runtime journal. Apr 16 02:09:24.345198 kernel: loop0: detected capacity change from 0 to 110984 Apr 16 02:09:24.270144 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 16 02:09:24.273244 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 16 02:09:24.278332 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 16 02:09:24.291959 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 16 02:09:24.302126 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 16 02:09:24.358569 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 16 02:09:24.377849 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 16 02:09:24.383050 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 02:09:24.462361 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 16 02:09:24.468312 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 16 02:09:24.474485 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 16 02:09:24.503516 kernel: loop1: detected capacity change from 0 to 219192 Apr 16 02:09:24.509277 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 16 02:09:24.589719 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Apr 16 02:09:24.589740 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Apr 16 02:09:24.607306 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 16 02:09:24.634730 kernel: loop2: detected capacity change from 0 to 128560 Apr 16 02:09:24.775718 kernel: loop3: detected capacity change from 0 to 110984 Apr 16 02:09:24.845729 kernel: loop4: detected capacity change from 0 to 219192 Apr 16 02:09:24.950637 kernel: loop5: detected capacity change from 0 to 128560 Apr 16 02:09:25.050157 (sd-merge)[1284]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 16 02:09:25.052970 (sd-merge)[1284]: Merged extensions into '/usr'. Apr 16 02:09:25.068448 systemd[1]: Reload requested from client PID 1261 ('systemd-sysext') (unit systemd-sysext.service)... Apr 16 02:09:25.068464 systemd[1]: Reloading... Apr 16 02:09:25.236737 zram_generator::config[1313]: No configuration found. Apr 16 02:09:25.875194 systemd[1]: Reloading finished in 805 ms. Apr 16 02:09:25.922118 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 16 02:09:25.928297 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 16 02:09:25.981248 systemd[1]: Starting ensure-sysext.service... Apr 16 02:09:26.046870 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 02:09:26.058720 ldconfig[1256]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 16 02:09:26.077188 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 02:09:26.111254 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 16 02:09:26.124600 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 16 02:09:26.124633 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 16 02:09:26.125406 systemd[1]: Reload requested from client PID 1347 ('systemctl') (unit ensure-sysext.service)... Apr 16 02:09:26.125444 systemd[1]: Reloading... Apr 16 02:09:26.125862 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 16 02:09:26.129336 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 16 02:09:26.130127 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 16 02:09:26.130356 systemd-tmpfiles[1348]: ACLs are not supported, ignoring. Apr 16 02:09:26.130403 systemd-tmpfiles[1348]: ACLs are not supported, ignoring. Apr 16 02:09:26.140461 systemd-tmpfiles[1348]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 02:09:26.140474 systemd-tmpfiles[1348]: Skipping /boot Apr 16 02:09:26.165301 systemd-tmpfiles[1348]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 02:09:26.166114 systemd-tmpfiles[1348]: Skipping /boot Apr 16 02:09:26.245117 systemd-udevd[1349]: Using default interface naming scheme 'v255'. Apr 16 02:09:26.261897 zram_generator::config[1374]: No configuration found. Apr 16 02:09:26.772882 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Apr 16 02:09:26.786873 kernel: ACPI: button: Power Button [PWRF] Apr 16 02:09:26.939718 kernel: mousedev: PS/2 mouse device common for all mice Apr 16 02:09:27.003637 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Apr 16 02:09:27.004568 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 16 02:09:27.010417 systemd[1]: Reloading finished in 884 ms. Apr 16 02:09:27.049362 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 02:09:27.058185 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 02:09:27.220850 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 02:09:27.227016 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 02:09:27.246238 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 16 02:09:27.253114 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 02:09:27.435040 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 02:09:27.443241 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 02:09:27.464602 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 02:09:27.482540 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 02:09:27.527242 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 02:09:27.531362 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 16 02:09:27.540858 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 02:09:27.567511 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 16 02:09:27.583741 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 02:09:27.627475 augenrules[1496]: No rules Apr 16 02:09:27.644100 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 02:09:27.672090 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 16 02:09:27.736132 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 02:09:27.741892 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 02:09:27.747845 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 02:09:27.758198 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 02:09:27.768332 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 02:09:27.768510 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 02:09:27.773459 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 02:09:27.774531 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 02:09:27.779188 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 02:09:27.779356 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 02:09:27.783377 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 02:09:27.783572 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Apr 16 02:09:27.792315 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 16 02:09:27.800751 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 16 02:09:27.806557 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 16 02:09:27.822091 systemd[1]: Finished ensure-sysext.service. Apr 16 02:09:27.838313 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 16 02:09:27.839360 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 16 02:09:27.875874 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 16 02:09:27.919491 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 02:09:27.919870 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 02:09:27.934073 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 16 02:09:27.938320 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 16 02:09:27.964331 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 16 02:09:27.964470 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 16 02:09:28.003966 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 16 02:09:28.232612 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 16 02:09:28.236032 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 02:09:28.473747 systemd-networkd[1492]: lo: Link UP Apr 16 02:09:28.473756 systemd-networkd[1492]: lo: Gained carrier Apr 16 02:09:28.475017 systemd-networkd[1492]: Enumeration completed Apr 16 02:09:28.476362 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 16 02:09:28.477581 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 02:09:28.477749 systemd-networkd[1492]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 02:09:28.479636 systemd-networkd[1492]: eth0: Link UP Apr 16 02:09:28.479942 systemd-networkd[1492]: eth0: Gained carrier Apr 16 02:09:28.480022 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 02:09:28.536030 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 16 02:09:28.544081 systemd-resolved[1501]: Positive Trust Anchors: Apr 16 02:09:28.544519 systemd-resolved[1501]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 02:09:28.544606 systemd-resolved[1501]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 02:09:28.552060 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 16 02:09:28.552442 systemd-resolved[1501]: Defaulting to hostname 'linux'. Apr 16 02:09:28.553437 systemd-networkd[1492]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 16 02:09:28.554638 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection. Apr 16 02:09:28.556473 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 16 02:09:28.562444 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 02:09:29.226262 systemd-resolved[1501]: Clock change detected. Flushing caches. Apr 16 02:09:29.226406 systemd-timesyncd[1516]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 16 02:09:29.226456 systemd-timesyncd[1516]: Initial clock synchronization to Thu 2026-04-16 02:09:29.226195 UTC. Apr 16 02:09:29.226850 systemd[1]: Reached target network.target - Network. Apr 16 02:09:29.229437 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 02:09:29.233198 systemd[1]: Reached target sysinit.target - System Initialization. Apr 16 02:09:29.236082 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 16 02:09:29.240053 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 16 02:09:29.270311 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 16 02:09:29.278440 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 16 02:09:29.288436 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 16 02:09:29.288851 systemd[1]: Reached target paths.target - Path Units. Apr 16 02:09:29.294441 systemd[1]: Reached target time-set.target - System Time Set. Apr 16 02:09:29.306333 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 16 02:09:29.309897 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 16 02:09:29.319107 systemd[1]: Reached target timers.target - Timer Units. Apr 16 02:09:29.328056 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 16 02:09:29.341261 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 16 02:09:29.419076 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 16 02:09:29.426002 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 16 02:09:29.439975 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Apr 16 02:09:29.456658 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 16 02:09:29.460647 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 16 02:09:29.468581 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 16 02:09:29.486870 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 16 02:09:29.496343 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 02:09:29.499027 systemd[1]: Reached target basic.target - Basic System. Apr 16 02:09:29.507232 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 16 02:09:29.507289 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 16 02:09:29.514049 systemd[1]: Starting containerd.service - containerd container runtime... Apr 16 02:09:29.521801 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 16 02:09:29.552081 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 16 02:09:29.585092 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 16 02:09:29.614851 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 16 02:09:29.619966 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 16 02:09:29.622804 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 16 02:09:29.672039 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 16 02:09:29.679381 jq[1541]: false Apr 16 02:09:29.707371 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Refreshing passwd entry cache Apr 16 02:09:29.692933 oslogin_cache_refresh[1543]: Refreshing passwd entry cache Apr 16 02:09:29.708049 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 16 02:09:29.722980 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Failure getting users, quitting Apr 16 02:09:29.722980 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 16 02:09:29.722980 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Refreshing group entry cache Apr 16 02:09:29.721488 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 16 02:09:29.720164 oslogin_cache_refresh[1543]: Failure getting users, quitting Apr 16 02:09:29.720229 oslogin_cache_refresh[1543]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 16 02:09:29.720316 oslogin_cache_refresh[1543]: Refreshing group entry cache Apr 16 02:09:29.737782 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Failure getting groups, quitting Apr 16 02:09:29.737782 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 16 02:09:29.737488 oslogin_cache_refresh[1543]: Failure getting groups, quitting Apr 16 02:09:29.737535 oslogin_cache_refresh[1543]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 16 02:09:29.743708 extend-filesystems[1542]: Found /dev/vda6 Apr 16 02:09:29.767273 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Apr 16 02:09:29.775778 extend-filesystems[1542]: Found /dev/vda9 Apr 16 02:09:29.783720 extend-filesystems[1542]: Checking size of /dev/vda9 Apr 16 02:09:29.789989 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 16 02:09:29.795510 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 16 02:09:29.800053 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 16 02:09:29.803309 systemd[1]: Starting update-engine.service - Update Engine... Apr 16 02:09:29.810211 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 16 02:09:29.817311 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 16 02:09:29.821526 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 16 02:09:29.823133 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 16 02:09:29.828875 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Apr 16 02:09:29.829108 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Apr 16 02:09:29.834979 jq[1564]: true Apr 16 02:09:29.836508 extend-filesystems[1542]: Resized partition /dev/vda9 Apr 16 02:09:29.837196 systemd[1]: motdgen.service: Deactivated successfully. Apr 16 02:09:29.838665 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 16 02:09:29.900151 update_engine[1561]: I20260416 02:09:29.890127 1561 main.cc:92] Flatcar Update Engine starting Apr 16 02:09:29.892232 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 16 02:09:29.892721 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 16 02:09:29.908490 extend-filesystems[1569]: resize2fs 1.47.3 (8-Jul-2025) Apr 16 02:09:29.920596 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 16 02:09:29.968470 (ntainerd)[1572]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 16 02:09:29.994191 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 16 02:09:30.003160 jq[1571]: true Apr 16 02:09:30.043355 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 16 02:09:30.072918 tar[1570]: linux-amd64/LICENSE Apr 16 02:09:30.121622 tar[1570]: linux-amd64/helm Apr 16 02:09:30.126287 extend-filesystems[1569]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 16 02:09:30.126287 extend-filesystems[1569]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 16 02:09:30.126287 extend-filesystems[1569]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 16 02:09:30.173143 extend-filesystems[1542]: Resized filesystem in /dev/vda9 Apr 16 02:09:30.128607 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 16 02:09:30.128918 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 16 02:09:30.130721 systemd-logind[1559]: Watching system buttons on /dev/input/event2 (Power Button) Apr 16 02:09:30.130738 systemd-logind[1559]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 16 02:09:30.133517 systemd-logind[1559]: New seat seat0. Apr 16 02:09:30.142871 systemd[1]: Started systemd-logind.service - User Login Management. 
Apr 16 02:09:30.212315 dbus-daemon[1539]: [system] SELinux support is enabled Apr 16 02:09:30.212905 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 16 02:09:30.227101 bash[1604]: Updated "/home/core/.ssh/authorized_keys" Apr 16 02:09:30.230973 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 16 02:09:30.236700 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 16 02:09:30.236991 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 16 02:09:30.239999 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 16 02:09:30.281697 update_engine[1561]: I20260416 02:09:30.241432 1561 update_check_scheduler.cc:74] Next update check in 6m58s Apr 16 02:09:30.281869 sshd_keygen[1565]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 16 02:09:30.283759 dbus-daemon[1539]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 16 02:09:30.286961 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 16 02:09:30.287277 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 16 02:09:30.297301 systemd[1]: Started update-engine.service - Update Engine. Apr 16 02:09:30.320936 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 16 02:09:30.360896 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 16 02:09:30.367519 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 16 02:09:30.376380 systemd[1]: Started sshd@0-10.0.0.34:22-10.0.0.1:46974.service - OpenSSH per-connection server daemon (10.0.0.1:46974). Apr 16 02:09:30.484528 systemd[1]: issuegen.service: Deactivated successfully. Apr 16 02:09:30.485957 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 16 02:09:30.519361 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 16 02:09:30.532039 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 16 02:09:30.576196 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 16 02:09:30.597987 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 16 02:09:30.618022 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 16 02:09:30.629324 systemd[1]: Reached target getty.target - Login Prompts. Apr 16 02:09:30.735057 systemd-networkd[1492]: eth0: Gained IPv6LL Apr 16 02:09:30.742467 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 16 02:09:30.758285 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 46974 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:09:30.762071 systemd[1]: Reached target network-online.target - Network is Online. Apr 16 02:09:30.766437 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:09:30.774044 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 16 02:09:30.783303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 16 02:09:30.801107 containerd[1572]: time="2026-04-16T02:09:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 16 02:09:30.805636 containerd[1572]: time="2026-04-16T02:09:30.805430344Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Apr 16 02:09:30.815234 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 16 02:09:30.842634 containerd[1572]: time="2026-04-16T02:09:30.842409008Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="26.649µs" Apr 16 02:09:30.842634 containerd[1572]: time="2026-04-16T02:09:30.842481307Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 16 02:09:30.842634 containerd[1572]: time="2026-04-16T02:09:30.842502251Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 16 02:09:30.843746 containerd[1572]: time="2026-04-16T02:09:30.842837919Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 16 02:09:30.843746 containerd[1572]: time="2026-04-16T02:09:30.842878826Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 16 02:09:30.843746 containerd[1572]: time="2026-04-16T02:09:30.842912449Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 16 02:09:30.843746 containerd[1572]: time="2026-04-16T02:09:30.842974049Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 16 02:09:30.843746 containerd[1572]: time="2026-04-16T02:09:30.842984233Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 16 02:09:30.843746 containerd[1572]: time="2026-04-16T02:09:30.843243463Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 16 02:09:30.843746 containerd[1572]: time="2026-04-16T02:09:30.843259484Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 16 02:09:30.843746 containerd[1572]: time="2026-04-16T02:09:30.843272230Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 16 02:09:30.843746 containerd[1572]: time="2026-04-16T02:09:30.843280979Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 16 02:09:30.843746 containerd[1572]: time="2026-04-16T02:09:30.843353340Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 16 02:09:30.844109 containerd[1572]: time="2026-04-16T02:09:30.843792372Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 16 02:09:30.844109 containerd[1572]: time="2026-04-16T02:09:30.843849891Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 16 02:09:30.844109 containerd[1572]: time="2026-04-16T02:09:30.843863070Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 16 02:09:30.844109 containerd[1572]: time="2026-04-16T02:09:30.843916080Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 16 02:09:30.844609 containerd[1572]: time="2026-04-16T02:09:30.844229234Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 16 02:09:30.844609 containerd[1572]: time="2026-04-16T02:09:30.844315815Z" level=info msg="metadata content store policy set" policy=shared Apr 16 02:09:30.893832 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 16 02:09:30.902095 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 16 02:09:30.921658 containerd[1572]: time="2026-04-16T02:09:30.921424166Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 16 02:09:30.922601 containerd[1572]: time="2026-04-16T02:09:30.921890894Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 16 02:09:30.922601 containerd[1572]: time="2026-04-16T02:09:30.921987303Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 16 02:09:30.922601 containerd[1572]: time="2026-04-16T02:09:30.922007777Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 16 02:09:30.922601 containerd[1572]: time="2026-04-16T02:09:30.922023714Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 16 02:09:30.922601 containerd[1572]: time="2026-04-16T02:09:30.922036733Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 16 02:09:30.922601 containerd[1572]: time="2026-04-16T02:09:30.922050110Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 16 02:09:30.922601 containerd[1572]: time="2026-04-16T02:09:30.922113933Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 16 02:09:30.922601 containerd[1572]: time="2026-04-16T02:09:30.922140926Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 16 02:09:30.922601 containerd[1572]: time="2026-04-16T02:09:30.922156098Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 16 02:09:30.922601 containerd[1572]: time="2026-04-16T02:09:30.922170994Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 16 02:09:30.922601 containerd[1572]: time="2026-04-16T02:09:30.922185970Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 16 02:09:30.922601 containerd[1572]: time="2026-04-16T02:09:30.922377892Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 16 02:09:30.922601 containerd[1572]: time="2026-04-16T02:09:30.922409556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers 
type=io.containerd.grpc.v1 Apr 16 02:09:30.922601 containerd[1572]: time="2026-04-16T02:09:30.922427231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 16 02:09:30.923001 containerd[1572]: time="2026-04-16T02:09:30.922439624Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 16 02:09:30.923001 containerd[1572]: time="2026-04-16T02:09:30.922458969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 16 02:09:30.923001 containerd[1572]: time="2026-04-16T02:09:30.922471663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 16 02:09:30.923001 containerd[1572]: time="2026-04-16T02:09:30.922484025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 16 02:09:30.923001 containerd[1572]: time="2026-04-16T02:09:30.922495499Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 16 02:09:30.923001 containerd[1572]: time="2026-04-16T02:09:30.922507290Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 16 02:09:30.923001 containerd[1572]: time="2026-04-16T02:09:30.922519908Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 16 02:09:30.923001 containerd[1572]: time="2026-04-16T02:09:30.922535913Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 16 02:09:30.926788 containerd[1572]: time="2026-04-16T02:09:30.925867258Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 16 02:09:30.926788 containerd[1572]: time="2026-04-16T02:09:30.926440290Z" level=info msg="Start snapshots syncer" Apr 16 02:09:30.927165 containerd[1572]: time="2026-04-16T02:09:30.927120711Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 16 02:09:30.929987 containerd[1572]: time="2026-04-16T02:09:30.929085380Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 16 02:09:30.930417 containerd[1572]: time="2026-04-16T02:09:30.930394143Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 16 02:09:30.933172 containerd[1572]: time="2026-04-16T02:09:30.930767601Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 16 02:09:30.933465 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 16 02:09:30.933840 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Apr 16 02:09:30.938002 containerd[1572]: time="2026-04-16T02:09:30.936179407Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 16 02:09:30.938002 containerd[1572]: time="2026-04-16T02:09:30.936277691Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 16 02:09:30.938002 containerd[1572]: time="2026-04-16T02:09:30.936292828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 16 02:09:30.938002 containerd[1572]: time="2026-04-16T02:09:30.936304966Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 16 02:09:30.938002 containerd[1572]: time="2026-04-16T02:09:30.936320709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 16 02:09:30.938002 containerd[1572]: time="2026-04-16T02:09:30.936332270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 16 02:09:30.938002 containerd[1572]: time="2026-04-16T02:09:30.936343988Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 16 02:09:30.938002 containerd[1572]: time="2026-04-16T02:09:30.936382915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 16 02:09:30.938002 containerd[1572]: time="2026-04-16T02:09:30.936396534Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 16 02:09:30.938002 containerd[1572]: time="2026-04-16T02:09:30.936409455Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 16 02:09:30.938002 containerd[1572]: time="2026-04-16T02:09:30.936459842Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 16 02:09:30.938002 containerd[1572]: time="2026-04-16T02:09:30.936477542Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 16 02:09:30.938002 containerd[1572]: time="2026-04-16T02:09:30.936487367Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 16 02:09:30.935874 systemd-logind[1559]: New session 1 of user core. 
Apr 16 02:09:30.938424 containerd[1572]: time="2026-04-16T02:09:30.936502223Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 16 02:09:30.938424 containerd[1572]: time="2026-04-16T02:09:30.936510675Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 16 02:09:30.938424 containerd[1572]: time="2026-04-16T02:09:30.936520715Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 16 02:09:30.938424 containerd[1572]: time="2026-04-16T02:09:30.936538822Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 16 02:09:30.938424 containerd[1572]: time="2026-04-16T02:09:30.936788904Z" level=info msg="runtime interface created" Apr 16 02:09:30.938424 containerd[1572]: time="2026-04-16T02:09:30.936803479Z" level=info msg="created NRI interface" Apr 16 02:09:30.938424 containerd[1572]: time="2026-04-16T02:09:30.936824013Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 16 02:09:30.938424 containerd[1572]: time="2026-04-16T02:09:30.936853345Z" level=info msg="Connect containerd service" Apr 16 02:09:30.938424 containerd[1572]: time="2026-04-16T02:09:30.936888148Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 16 02:09:30.942628 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 16 02:09:30.956635 containerd[1572]: time="2026-04-16T02:09:30.942542886Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 02:09:30.958350 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 16 02:09:30.966745 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 16 02:09:30.980894 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 16 02:09:31.017301 tar[1570]: linux-amd64/README.md Apr 16 02:09:31.025621 (systemd)[1661]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 16 02:09:31.038506 systemd-logind[1559]: New session c1 of user core. Apr 16 02:09:31.097373 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 16 02:09:31.299514 containerd[1572]: time="2026-04-16T02:09:31.299348604Z" level=info msg="Start subscribing containerd event" Apr 16 02:09:31.300110 containerd[1572]: time="2026-04-16T02:09:31.300002684Z" level=info msg="Start recovering state" Apr 16 02:09:31.300384 containerd[1572]: time="2026-04-16T02:09:31.300372061Z" level=info msg="Start event monitor" Apr 16 02:09:31.300469 containerd[1572]: time="2026-04-16T02:09:31.300460443Z" level=info msg="Start cni network conf syncer for default" Apr 16 02:09:31.300579 containerd[1572]: time="2026-04-16T02:09:31.300571241Z" level=info msg="Start streaming server" Apr 16 02:09:31.300648 containerd[1572]: time="2026-04-16T02:09:31.299528132Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 16 02:09:31.301873 containerd[1572]: time="2026-04-16T02:09:31.301834548Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 16 02:09:31.303002 containerd[1572]: time="2026-04-16T02:09:31.300719786Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 16 02:09:31.311385 containerd[1572]: time="2026-04-16T02:09:31.311109051Z" level=info msg="runtime interface starting up..." Apr 16 02:09:31.311385 containerd[1572]: time="2026-04-16T02:09:31.311367011Z" level=info msg="starting plugins..." Apr 16 02:09:31.311603 containerd[1572]: time="2026-04-16T02:09:31.311456925Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 16 02:09:31.313144 systemd[1]: Started containerd.service - containerd container runtime. Apr 16 02:09:31.316096 containerd[1572]: time="2026-04-16T02:09:31.313379635Z" level=info msg="containerd successfully booted in 0.514009s" Apr 16 02:09:31.522086 systemd[1661]: Queued start job for default target default.target. Apr 16 02:09:31.538321 systemd[1661]: Created slice app.slice - User Application Slice. Apr 16 02:09:31.538377 systemd[1661]: Reached target paths.target - Paths. Apr 16 02:09:31.538424 systemd[1661]: Reached target timers.target - Timers. Apr 16 02:09:31.543404 systemd[1661]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 16 02:09:31.639225 systemd[1661]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 16 02:09:31.682200 systemd[1661]: Reached target sockets.target - Sockets. Apr 16 02:09:31.682525 systemd[1661]: Reached target basic.target - Basic System. Apr 16 02:09:31.682622 systemd[1661]: Reached target default.target - Main User Target. Apr 16 02:09:31.682652 systemd[1661]: Startup finished in 580ms. Apr 16 02:09:31.682747 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 16 02:09:31.703151 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 16 02:09:31.804442 systemd[1]: Started sshd@1-10.0.0.34:22-10.0.0.1:46988.service - OpenSSH per-connection server daemon (10.0.0.1:46988). Apr 16 02:09:32.034800 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 46988 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:09:32.039314 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:09:32.088962 systemd-logind[1559]: New session 2 of user core. Apr 16 02:09:32.100005 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 16 02:09:32.177314 sshd[1690]: Connection closed by 10.0.0.1 port 46988 Apr 16 02:09:32.186052 sshd-session[1687]: pam_unix(sshd:session): session closed for user core Apr 16 02:09:32.205193 systemd[1]: sshd@1-10.0.0.34:22-10.0.0.1:46988.service: Deactivated successfully. Apr 16 02:09:32.215540 systemd[1]: session-2.scope: Deactivated successfully. Apr 16 02:09:32.234210 systemd-logind[1559]: Session 2 logged out. Waiting for processes to exit. Apr 16 02:09:32.284376 systemd[1]: Started sshd@2-10.0.0.34:22-10.0.0.1:46996.service - OpenSSH per-connection server daemon (10.0.0.1:46996). Apr 16 02:09:32.301025 systemd-logind[1559]: Removed session 2. Apr 16 02:09:32.529817 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 46996 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:09:32.532243 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:09:32.570148 systemd-logind[1559]: New session 3 of user core. Apr 16 02:09:32.587121 systemd[1]: Started session-3.scope - Session 3 of User core. 
Apr 16 02:09:32.750514 sshd[1699]: Connection closed by 10.0.0.1 port 46996 Apr 16 02:09:32.752966 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Apr 16 02:09:32.768898 systemd[1]: sshd@2-10.0.0.34:22-10.0.0.1:46996.service: Deactivated successfully. Apr 16 02:09:32.778189 systemd[1]: session-3.scope: Deactivated successfully. Apr 16 02:09:32.780933 systemd-logind[1559]: Session 3 logged out. Waiting for processes to exit. Apr 16 02:09:32.787753 systemd-logind[1559]: Removed session 3. Apr 16 02:09:34.207663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:09:34.215136 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 16 02:09:34.221945 systemd[1]: Startup finished in 6.052s (kernel) + 15.681s (initrd) + 13.124s (userspace) = 34.858s. Apr 16 02:09:34.231597 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 02:09:36.639632 kubelet[1709]: E0416 02:09:36.639370 1709 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 02:09:36.647029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 02:09:36.647438 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 02:09:36.650614 systemd[1]: kubelet.service: Consumed 1.688s CPU time, 257.9M memory peak. Apr 16 02:09:42.821999 systemd[1]: Started sshd@3-10.0.0.34:22-10.0.0.1:51258.service - OpenSSH per-connection server daemon (10.0.0.1:51258). Apr 16 02:09:43.032877 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 51258 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:09:43.036875 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:09:43.067928 systemd-logind[1559]: New session 4 of user core. Apr 16 02:09:43.082362 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 16 02:09:43.219027 sshd[1726]: Connection closed by 10.0.0.1 port 51258 Apr 16 02:09:43.218337 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Apr 16 02:09:43.232228 systemd[1]: sshd@3-10.0.0.34:22-10.0.0.1:51258.service: Deactivated successfully. Apr 16 02:09:43.237910 systemd[1]: session-4.scope: Deactivated successfully. Apr 16 02:09:43.242703 systemd-logind[1559]: Session 4 logged out. Waiting for processes to exit. Apr 16 02:09:43.249770 systemd[1]: Started sshd@4-10.0.0.34:22-10.0.0.1:51270.service - OpenSSH per-connection server daemon (10.0.0.1:51270). Apr 16 02:09:43.258004 systemd-logind[1559]: Removed session 4. Apr 16 02:09:43.446673 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 51270 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:09:43.451473 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:09:43.484811 systemd-logind[1559]: New session 5 of user core. Apr 16 02:09:43.499602 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 16 02:09:43.537999 sshd[1735]: Connection closed by 10.0.0.1 port 51270 Apr 16 02:09:43.539088 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Apr 16 02:09:43.629148 systemd[1]: sshd@4-10.0.0.34:22-10.0.0.1:51270.service: Deactivated successfully. Apr 16 02:09:43.636444 systemd[1]: session-5.scope: Deactivated successfully. Apr 16 02:09:43.648337 systemd-logind[1559]: Session 5 logged out. Waiting for processes to exit. Apr 16 02:09:43.654103 systemd[1]: Started sshd@5-10.0.0.34:22-10.0.0.1:51284.service - OpenSSH per-connection server daemon (10.0.0.1:51284). Apr 16 02:09:43.660175 systemd-logind[1559]: Removed session 5. Apr 16 02:09:43.888779 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 51284 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:09:43.893825 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:09:43.984074 systemd-logind[1559]: New session 6 of user core. Apr 16 02:09:43.993464 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 16 02:09:44.070541 sshd[1744]: Connection closed by 10.0.0.1 port 51284 Apr 16 02:09:44.070814 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Apr 16 02:09:44.117962 systemd[1]: sshd@5-10.0.0.34:22-10.0.0.1:51284.service: Deactivated successfully. Apr 16 02:09:44.175505 systemd[1]: session-6.scope: Deactivated successfully. Apr 16 02:09:44.192521 systemd-logind[1559]: Session 6 logged out. Waiting for processes to exit. Apr 16 02:09:44.196177 systemd[1]: Started sshd@6-10.0.0.34:22-10.0.0.1:51292.service - OpenSSH per-connection server daemon (10.0.0.1:51292). Apr 16 02:09:44.200206 systemd-logind[1559]: Removed session 6. Apr 16 02:09:44.394776 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 51292 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:09:44.403250 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:09:44.462776 systemd-logind[1559]: New session 7 of user core. Apr 16 02:09:44.485316 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 16 02:09:44.612417 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 16 02:09:44.613068 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 02:09:44.643766 sudo[1754]: pam_unix(sudo:session): session closed for user root Apr 16 02:09:44.665867 sshd[1753]: Connection closed by 10.0.0.1 port 51292 Apr 16 02:09:44.664938 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Apr 16 02:09:44.721877 systemd[1]: sshd@6-10.0.0.34:22-10.0.0.1:51292.service: Deactivated successfully. Apr 16 02:09:44.732356 systemd[1]: session-7.scope: Deactivated successfully. Apr 16 02:09:44.761614 systemd-logind[1559]: Session 7 logged out. Waiting for processes to exit. Apr 16 02:09:44.780390 systemd[1]: Started sshd@7-10.0.0.34:22-10.0.0.1:51302.service - OpenSSH per-connection server daemon (10.0.0.1:51302). Apr 16 02:09:44.781261 systemd-logind[1559]: Removed session 7. Apr 16 02:09:45.020022 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 51302 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:09:45.025971 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:09:45.042949 systemd-logind[1559]: New session 8 of user core. 
Apr 16 02:09:45.063016 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 16 02:09:45.124491 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 16 02:09:45.124968 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 02:09:45.140387 sudo[1765]: pam_unix(sudo:session): session closed for user root Apr 16 02:09:45.185406 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 16 02:09:45.186196 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 02:09:45.233029 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 02:09:45.436257 augenrules[1787]: No rules Apr 16 02:09:45.444271 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 02:09:45.445118 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 02:09:45.454613 sudo[1764]: pam_unix(sudo:session): session closed for user root Apr 16 02:09:45.466013 sshd[1763]: Connection closed by 10.0.0.1 port 51302 Apr 16 02:09:45.469479 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Apr 16 02:09:45.501075 systemd[1]: sshd@7-10.0.0.34:22-10.0.0.1:51302.service: Deactivated successfully. Apr 16 02:09:45.516177 systemd[1]: session-8.scope: Deactivated successfully. Apr 16 02:09:45.522996 systemd-logind[1559]: Session 8 logged out. Waiting for processes to exit. Apr 16 02:09:45.580852 systemd[1]: Started sshd@8-10.0.0.34:22-10.0.0.1:35284.service - OpenSSH per-connection server daemon (10.0.0.1:35284). Apr 16 02:09:45.582283 systemd-logind[1559]: Removed session 8. Apr 16 02:09:45.791570 sshd[1796]: Accepted publickey for core from 10.0.0.1 port 35284 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:09:45.800334 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:09:45.833700 systemd-logind[1559]: New session 9 of user core. Apr 16 02:09:45.876260 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 16 02:09:45.925329 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 16 02:09:45.927149 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 02:09:46.733180 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 16 02:09:46.780163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:09:47.440994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:09:47.459038 (kubelet)[1828]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 02:09:47.464535 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 16 02:09:47.486528 (dockerd)[1830]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 16 02:09:47.686391 kubelet[1828]: E0416 02:09:47.685686 1828 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 02:09:47.709409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 02:09:47.709891 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 02:09:47.713683 systemd[1]: kubelet.service: Consumed 364ms CPU time, 109.2M memory peak. Apr 16 02:09:48.615772 dockerd[1830]: time="2026-04-16T02:09:48.615490537Z" level=info msg="Starting up" Apr 16 02:09:48.620630 dockerd[1830]: time="2026-04-16T02:09:48.620432760Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 16 02:09:48.712057 dockerd[1830]: time="2026-04-16T02:09:48.711868673Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 16 02:09:49.089702 dockerd[1830]: time="2026-04-16T02:09:49.088235472Z" level=info msg="Loading containers: start." Apr 16 02:09:49.168036 kernel: Initializing XFRM netlink socket Apr 16 02:09:51.105541 systemd-networkd[1492]: docker0: Link UP Apr 16 02:09:51.137031 dockerd[1830]: time="2026-04-16T02:09:51.136842775Z" level=info msg="Loading containers: done." Apr 16 02:09:51.266592 dockerd[1830]: time="2026-04-16T02:09:51.266426240Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 16 02:09:51.266879 dockerd[1830]: time="2026-04-16T02:09:51.266652557Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 16 02:09:51.266879 dockerd[1830]: time="2026-04-16T02:09:51.266809525Z" level=info msg="Initializing buildkit" Apr 16 02:09:51.539088 dockerd[1830]: time="2026-04-16T02:09:51.534329433Z" level=info msg="Completed buildkit initialization" Apr 16 02:09:51.624998 dockerd[1830]: time="2026-04-16T02:09:51.617609377Z" level=info msg="Daemon has completed initialization" Apr 16 02:09:51.624998 dockerd[1830]: time="2026-04-16T02:09:51.623051822Z" level=info msg="API listen on /run/docker.sock" Apr 16 02:09:51.619405 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 16 02:09:53.847729 containerd[1572]: time="2026-04-16T02:09:53.847629521Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 16 02:09:55.163166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3180060699.mount: Deactivated successfully. Apr 16 02:09:57.732898 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 16 02:09:57.739410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:09:58.081684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 02:09:58.114468 (kubelet)[2122]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 02:09:58.351109 kubelet[2122]: E0416 02:09:58.350653 2122 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 02:09:58.355054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 02:09:58.355222 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 02:09:58.355651 systemd[1]: kubelet.service: Consumed 321ms CPU time, 109.4M memory peak. Apr 16 02:10:01.271119 containerd[1572]: time="2026-04-16T02:10:01.270846985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:01.272288 containerd[1572]: time="2026-04-16T02:10:01.272200713Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27099952" Apr 16 02:10:01.278087 containerd[1572]: time="2026-04-16T02:10:01.277752709Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:01.295893 containerd[1572]: time="2026-04-16T02:10:01.295111899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:01.298291 containerd[1572]: time="2026-04-16T02:10:01.298132546Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 7.450432606s" Apr 16 02:10:01.298291 containerd[1572]: time="2026-04-16T02:10:01.298201877Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 16 02:10:01.316220 containerd[1572]: time="2026-04-16T02:10:01.315641762Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 16 02:10:06.023477 containerd[1572]: time="2026-04-16T02:10:06.021172240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:06.071120 containerd[1572]: time="2026-04-16T02:10:06.039722897Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252670" Apr 16 02:10:06.076664 containerd[1572]: time="2026-04-16T02:10:06.074063362Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:06.086908 containerd[1572]: time="2026-04-16T02:10:06.083866019Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:06.086908 containerd[1572]: time="2026-04-16T02:10:06.084767263Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 4.769010059s" Apr 16 02:10:06.086908 containerd[1572]: time="2026-04-16T02:10:06.084803424Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 16 02:10:06.086908 containerd[1572]: time="2026-04-16T02:10:06.086166724Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 16 02:10:08.485057 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 16 02:10:08.491840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:10:08.932265 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:10:08.986290 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 02:10:09.131948 kubelet[2147]: E0416 02:10:09.131814 2147 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 02:10:09.136809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 02:10:09.136964 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 02:10:09.137599 systemd[1]: kubelet.service: Consumed 298ms CPU time, 111.7M memory peak. 
Apr 16 02:10:09.700482 containerd[1572]: time="2026-04-16T02:10:09.700063149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:09.707819 containerd[1572]: time="2026-04-16T02:10:09.705795975Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810823" Apr 16 02:10:09.711660 containerd[1572]: time="2026-04-16T02:10:09.711426345Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:09.726640 containerd[1572]: time="2026-04-16T02:10:09.724355098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:09.735612 containerd[1572]: time="2026-04-16T02:10:09.734752078Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 3.648507311s" Apr 16 02:10:09.737452 containerd[1572]: time="2026-04-16T02:10:09.736449441Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 16 02:10:09.740014 containerd[1572]: time="2026-04-16T02:10:09.739943041Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 16 02:10:12.735254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3101483077.mount: Deactivated successfully. Apr 16 02:10:15.396153 update_engine[1561]: I20260416 02:10:15.394191 1561 update_attempter.cc:509] Updating boot flags... 
Apr 16 02:10:15.800118 containerd[1572]: time="2026-04-16T02:10:15.798530492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:15.809075 containerd[1572]: time="2026-04-16T02:10:15.808960228Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972848" Apr 16 02:10:15.862489 containerd[1572]: time="2026-04-16T02:10:15.862363492Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:15.875286 containerd[1572]: time="2026-04-16T02:10:15.875175410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:15.877196 containerd[1572]: time="2026-04-16T02:10:15.877049193Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 6.137043237s" Apr 16 02:10:15.877196 containerd[1572]: time="2026-04-16T02:10:15.877140648Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 16 02:10:15.879200 containerd[1572]: time="2026-04-16T02:10:15.879133404Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 16 02:10:16.822193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3350837.mount: Deactivated successfully. Apr 16 02:10:19.259124 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 16 02:10:19.276100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:10:19.858080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:10:19.897635 (kubelet)[2238]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 02:10:20.318610 kubelet[2238]: E0416 02:10:20.318439 2238 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 02:10:20.351868 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 02:10:20.352055 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 02:10:20.352515 systemd[1]: kubelet.service: Consumed 553ms CPU time, 109.8M memory peak. 
Apr 16 02:10:23.335704 containerd[1572]: time="2026-04-16T02:10:23.334415616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:23.337485 containerd[1572]: time="2026-04-16T02:10:23.336932921Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483" Apr 16 02:10:23.341706 containerd[1572]: time="2026-04-16T02:10:23.339522788Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:23.354743 containerd[1572]: time="2026-04-16T02:10:23.353661829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:23.355805 containerd[1572]: time="2026-04-16T02:10:23.355716420Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 7.476521235s" Apr 16 02:10:23.355805 containerd[1572]: time="2026-04-16T02:10:23.355769828Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 16 02:10:23.357638 containerd[1572]: time="2026-04-16T02:10:23.357108687Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 16 02:10:24.520356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount304320705.mount: Deactivated successfully. 
Apr 16 02:10:24.622051 containerd[1572]: time="2026-04-16T02:10:24.621896548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:24.626775 containerd[1572]: time="2026-04-16T02:10:24.626567979Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 16 02:10:24.639201 containerd[1572]: time="2026-04-16T02:10:24.639104076Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:24.658048 containerd[1572]: time="2026-04-16T02:10:24.657778421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:24.660711 containerd[1572]: time="2026-04-16T02:10:24.660490192Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.303341081s" Apr 16 02:10:24.661042 containerd[1572]: time="2026-04-16T02:10:24.660742637Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 16 02:10:24.663729 containerd[1572]: time="2026-04-16T02:10:24.661669064Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 16 02:10:26.068038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2605567592.mount: Deactivated successfully. Apr 16 02:10:30.511755 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 16 02:10:30.527850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:10:31.279971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:10:31.330241 (kubelet)[2312]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 02:10:31.628363 kubelet[2312]: E0416 02:10:31.628142 2312 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 02:10:31.638114 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 02:10:31.638349 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 02:10:31.639371 systemd[1]: kubelet.service: Consumed 516ms CPU time, 110M memory peak. 
Apr 16 02:10:34.308093 containerd[1572]: time="2026-04-16T02:10:34.307964623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:34.314356 containerd[1572]: time="2026-04-16T02:10:34.314283020Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874255" Apr 16 02:10:34.331419 containerd[1572]: time="2026-04-16T02:10:34.329214273Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:34.395532 containerd[1572]: time="2026-04-16T02:10:34.395255605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:10:34.400111 containerd[1572]: time="2026-04-16T02:10:34.399945395Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 9.73549384s" Apr 16 02:10:34.400111 containerd[1572]: time="2026-04-16T02:10:34.400096337Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 16 02:10:41.737483 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 16 02:10:41.778349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:10:42.434133 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:10:42.489492 (kubelet)[2362]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 02:10:42.812525 kubelet[2362]: E0416 02:10:42.811670 2362 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 02:10:42.821986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 02:10:42.823526 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 02:10:42.827230 systemd[1]: kubelet.service: Consumed 531ms CPU time, 112.3M memory peak. Apr 16 02:10:47.533488 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:10:47.535045 systemd[1]: kubelet.service: Consumed 531ms CPU time, 112.3M memory peak. Apr 16 02:10:47.593778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:10:47.730000 systemd[1]: Reload requested from client PID 2379 ('systemctl') (unit session-9.scope)... Apr 16 02:10:47.730891 systemd[1]: Reloading... Apr 16 02:10:48.283273 zram_generator::config[2425]: No configuration found. Apr 16 02:10:50.148912 systemd[1]: Reloading finished in 2415 ms. Apr 16 02:10:50.381912 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 16 02:10:50.383259 systemd[1]: kubelet.service: Failed with result 'signal'. 
Apr 16 02:10:50.384415 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:10:50.384534 systemd[1]: kubelet.service: Consumed 233ms CPU time, 98.4M memory peak. Apr 16 02:10:50.395419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:10:51.292008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:10:51.324599 (kubelet)[2470]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 02:10:52.174351 kubelet[2470]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 02:10:52.174351 kubelet[2470]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 02:10:52.174351 kubelet[2470]: I0416 02:10:52.172706 2470 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 02:10:53.797803 kubelet[2470]: I0416 02:10:53.796438 2470 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 16 02:10:53.797803 kubelet[2470]: I0416 02:10:53.796488 2470 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 02:10:53.797803 kubelet[2470]: I0416 02:10:53.796529 2470 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 02:10:53.797803 kubelet[2470]: I0416 02:10:53.796542 2470 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 02:10:53.797803 kubelet[2470]: I0416 02:10:53.796949 2470 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 02:10:53.985464 kubelet[2470]: E0416 02:10:53.984041 2470 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 02:10:54.017987 kubelet[2470]: I0416 02:10:54.016622 2470 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 02:10:54.137203 kubelet[2470]: I0416 02:10:54.135885 2470 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 02:10:54.215267 kubelet[2470]: I0416 02:10:54.214412 2470 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 16 02:10:54.224229 kubelet[2470]: I0416 02:10:54.221953 2470 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 02:10:54.228075 kubelet[2470]: I0416 02:10:54.225713 2470 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 02:10:54.230255 kubelet[2470]: I0416 02:10:54.229457 2470 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 02:10:54.232361 kubelet[2470]: I0416 02:10:54.231201 2470 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 02:10:54.236751 kubelet[2470]: I0416 02:10:54.236349 2470 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 02:10:54.253637 kubelet[2470]: I0416 02:10:54.253416 2470 state_mem.go:36] "Initialized new in-memory state store" Apr 16 02:10:54.254623 kubelet[2470]: I0416 02:10:54.254597 2470 kubelet.go:475] "Attempting to sync node with API server" Apr 16 02:10:54.254623 kubelet[2470]: I0416 02:10:54.254623 2470 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 02:10:54.254771 kubelet[2470]: I0416 02:10:54.254652 2470 kubelet.go:387] "Adding apiserver pod source" Apr 16 02:10:54.254771 kubelet[2470]: I0416 02:10:54.254663 2470 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 02:10:54.286835 kubelet[2470]: E0416 02:10:54.286726 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 02:10:54.290698 kubelet[2470]: E0416 02:10:54.289312 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 
10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 02:10:54.317509 kubelet[2470]: I0416 02:10:54.315769 2470 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 16 02:10:54.338361 kubelet[2470]: I0416 02:10:54.335287 2470 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 02:10:54.338361 kubelet[2470]: I0416 02:10:54.335339 2470 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 02:10:54.338361 kubelet[2470]: W0416 02:10:54.335835 2470 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 16 02:10:54.402610 kubelet[2470]: I0416 02:10:54.402345 2470 server.go:1262] "Started kubelet" Apr 16 02:10:54.403601 kubelet[2470]: I0416 02:10:54.403418 2470 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 02:10:54.404591 kubelet[2470]: I0416 02:10:54.404361 2470 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 02:10:54.411626 kubelet[2470]: I0416 02:10:54.410629 2470 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 02:10:54.412742 kubelet[2470]: I0416 02:10:54.410539 2470 server.go:310] "Adding debug handlers to kubelet server" Apr 16 02:10:54.414210 kubelet[2470]: I0416 02:10:54.414095 2470 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 02:10:54.415982 kubelet[2470]: I0416 02:10:54.415219 2470 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 02:10:54.425366 kubelet[2470]: E0416 02:10:54.425155 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:10:54.425902 kubelet[2470]: I0416 02:10:54.425810 2470 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 16 02:10:54.426244 kubelet[2470]: I0416 02:10:54.426027 2470 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 02:10:54.428097 kubelet[2470]: I0416 02:10:54.428045 2470 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 02:10:54.428253 kubelet[2470]: I0416 02:10:54.428160 2470 reconciler.go:29] "Reconciler: start to sync state" Apr 16 02:10:54.428926 kubelet[2470]: E0416 02:10:54.426953 2470 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6b46645ff417f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 02:10:54.398259583 +0000 UTC m=+3.059777798,LastTimestamp:2026-04-16 02:10:54.398259583 +0000 UTC m=+3.059777798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 02:10:54.430031 kubelet[2470]: E0416 02:10:54.430001 2470 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="200ms" Apr 16 02:10:54.430342 kubelet[2470]: E0416 02:10:54.430113 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 02:10:54.433049 kubelet[2470]: I0416 02:10:54.433019 2470 factory.go:223] Registration of the systemd container factory successfully Apr 16 02:10:54.434096 kubelet[2470]: E0416 02:10:54.433052 2470 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 02:10:54.436287 kubelet[2470]: I0416 02:10:54.434255 2470 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 02:10:54.520887 kubelet[2470]: I0416 02:10:54.520014 2470 factory.go:223] Registration of the containerd container factory successfully Apr 16 02:10:54.534530 kubelet[2470]: E0416 02:10:54.534325 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:10:54.628900 kubelet[2470]: I0416 02:10:54.628373 2470 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 02:10:54.628900 kubelet[2470]: I0416 02:10:54.628537 2470 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 02:10:54.628900 kubelet[2470]: I0416 02:10:54.628580 2470 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 02:10:54.628900 kubelet[2470]: I0416 02:10:54.628671 2470 state_mem.go:36] "Initialized new in-memory state store" Apr 16 02:10:54.632113 kubelet[2470]: E0416 02:10:54.632046 2470 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="400ms" Apr 16 02:10:54.633612 kubelet[2470]: I0416 02:10:54.633353 2470 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 16 02:10:54.633955 kubelet[2470]: I0416 02:10:54.633911 2470 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 16 02:10:54.634010 kubelet[2470]: I0416 02:10:54.633971 2470 kubelet.go:2428] "Starting kubelet main sync loop" Apr 16 02:10:54.634076 kubelet[2470]: E0416 02:10:54.634029 2470 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 02:10:54.634076 kubelet[2470]: I0416 02:10:54.634066 2470 policy_none.go:49] "None policy: Start" Apr 16 02:10:54.634151 kubelet[2470]: I0416 02:10:54.634087 2470 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 02:10:54.634151 kubelet[2470]: I0416 02:10:54.634098 2470 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 02:10:54.642876 kubelet[2470]: E0416 02:10:54.637100 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:10:54.644014 kubelet[2470]: I0416 02:10:54.643974 2470 policy_none.go:47] "Start" Apr 16 02:10:54.645264 kubelet[2470]: E0416 02:10:54.645171 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 02:10:54.760122 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 16 02:10:54.761747 kubelet[2470]: E0416 02:10:54.761605 2470 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 02:10:54.761864 kubelet[2470]: E0416 02:10:54.761789 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:10:54.839472 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 16 02:10:54.905612 kubelet[2470]: E0416 02:10:54.905059 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:10:54.946158 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 16 02:10:54.962728 kubelet[2470]: E0416 02:10:54.962511 2470 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 02:10:54.973622 kubelet[2470]: E0416 02:10:54.973217 2470 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 02:10:54.973622 kubelet[2470]: I0416 02:10:54.973537 2470 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 02:10:54.973622 kubelet[2470]: I0416 02:10:54.973593 2470 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 02:10:54.974923 kubelet[2470]: I0416 02:10:54.974191 2470 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 02:10:54.986378 kubelet[2470]: E0416 02:10:54.986311 2470 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 16 02:10:54.986378 kubelet[2470]: E0416 02:10:54.986392 2470 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 02:10:55.037789 kubelet[2470]: E0416 02:10:55.037018 2470 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="800ms" Apr 16 02:10:55.111052 kubelet[2470]: I0416 02:10:55.110947 2470 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 02:10:55.112519 kubelet[2470]: E0416 02:10:55.112300 2470 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Apr 16 02:10:55.334317 kubelet[2470]: I0416 02:10:55.333623 2470 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 02:10:55.351810 kubelet[2470]: E0416 02:10:55.348506 2470 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Apr 16 02:10:55.401937 kubelet[2470]: E0416 02:10:55.400814 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 02:10:55.470477 kubelet[2470]: I0416 02:10:55.469061 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/014f1e99632b67d73d9d9321e27acea7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"014f1e99632b67d73d9d9321e27acea7\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:10:55.470477 kubelet[2470]: I0416 02:10:55.469510 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/014f1e99632b67d73d9d9321e27acea7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"014f1e99632b67d73d9d9321e27acea7\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:10:55.470477 kubelet[2470]: I0416 02:10:55.469536 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:10:55.470477 kubelet[2470]: I0416 02:10:55.469586 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:10:55.470477 kubelet[2470]: I0416 02:10:55.469665 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:10:55.471155 kubelet[2470]: I0416 02:10:55.469684 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:10:55.471155 kubelet[2470]: I0416 02:10:55.469702 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:10:55.471155 kubelet[2470]: I0416 02:10:55.469723 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/014f1e99632b67d73d9d9321e27acea7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"014f1e99632b67d73d9d9321e27acea7\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:10:55.521222 systemd[1]: Created slice kubepods-burstable-pod014f1e99632b67d73d9d9321e27acea7.slice - libcontainer container kubepods-burstable-pod014f1e99632b67d73d9d9321e27acea7.slice. Apr 16 02:10:55.572489 kubelet[2470]: I0416 02:10:55.571998 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 16 02:10:55.587015 kubelet[2470]: E0416 02:10:55.586716 2470 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:10:55.606348 kubelet[2470]: E0416 02:10:55.606232 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:10:55.610875 systemd[1]: Created slice kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice - libcontainer container kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice. 
Apr 16 02:10:55.614097 containerd[1572]: time="2026-04-16T02:10:55.614002008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:014f1e99632b67d73d9d9321e27acea7,Namespace:kube-system,Attempt:0,}" Apr 16 02:10:55.642077 kubelet[2470]: E0416 02:10:55.639733 2470 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:10:55.642077 kubelet[2470]: E0416 02:10:55.640277 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 02:10:55.669958 kubelet[2470]: E0416 02:10:55.668942 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 02:10:55.683356 kubelet[2470]: E0416 02:10:55.681043 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:10:55.685656 containerd[1572]: time="2026-04-16T02:10:55.684839581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,}" Apr 16 02:10:55.706687 systemd[1]: Created slice kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice - libcontainer container kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice. 
Apr 16 02:10:55.784770 kubelet[2470]: I0416 02:10:55.783387 2470 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 02:10:55.784770 kubelet[2470]: E0416 02:10:55.783847 2470 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Apr 16 02:10:55.791320 kubelet[2470]: E0416 02:10:55.788350 2470 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:10:55.803293 kubelet[2470]: E0416 02:10:55.802438 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:10:55.830381 containerd[1572]: time="2026-04-16T02:10:55.824047458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,}" Apr 16 02:10:55.881475 kubelet[2470]: E0416 02:10:55.878489 2470 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="1.6s" Apr 16 02:10:56.031850 kubelet[2470]: E0416 02:10:56.031742 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 02:10:56.261026 kubelet[2470]: E0416 02:10:56.260928 2470 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 02:10:56.609509 kubelet[2470]: I0416 02:10:56.608351 2470 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 02:10:56.617429 kubelet[2470]: E0416 02:10:56.617289 2470 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Apr 16 02:10:56.924650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2846662415.mount: Deactivated successfully. 
Apr 16 02:10:56.977848 containerd[1572]: time="2026-04-16T02:10:56.977738538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 02:10:56.990594 containerd[1572]: time="2026-04-16T02:10:56.989990200Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 16 02:10:57.019625 containerd[1572]: time="2026-04-16T02:10:57.018260220Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 02:10:57.043464 containerd[1572]: time="2026-04-16T02:10:57.043292059Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 02:10:57.074899 containerd[1572]: time="2026-04-16T02:10:57.074168899Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 02:10:57.079841 containerd[1572]: time="2026-04-16T02:10:57.079723076Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 02:10:57.089051 containerd[1572]: time="2026-04-16T02:10:57.088974454Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 02:10:57.098689 containerd[1572]: time="2026-04-16T02:10:57.094895148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 02:10:57.098689 containerd[1572]: time="2026-04-16T02:10:57.096892159Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.40472434s" Apr 16 02:10:57.100332 kubelet[2470]: E0416 02:10:57.099400 2470 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6b46645ff417f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 02:10:54.398259583 +0000 UTC m=+3.059777798,LastTimestamp:2026-04-16 02:10:54.398259583 +0000 UTC m=+3.059777798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 02:10:57.118294 containerd[1572]: time="2026-04-16T02:10:57.117883809Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo 
digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.279846693s" Apr 16 02:10:57.118944 containerd[1572]: time="2026-04-16T02:10:57.118900023Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.456050984s" Apr 16 02:10:57.275724 kubelet[2470]: E0416 02:10:57.275080 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 02:10:57.324665 containerd[1572]: time="2026-04-16T02:10:57.319964898Z" level=info msg="connecting to shim 8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454" address="unix:///run/containerd/s/19fb7b3958679c24ac66e8dd57527f0cf6dd433ec0ccb7dc7514e788b8b7a005" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:10:57.422234 kubelet[2470]: E0416 02:10:57.422079 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 02:10:57.423908 kubelet[2470]: E0416 02:10:57.423359 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 02:10:57.458938 containerd[1572]: time="2026-04-16T02:10:57.458768342Z" level=info msg="connecting to shim 4366e55fb3ab1fdb8980a6dd50bc34a4735f26e960d3230fb113208a8c6f0e52" address="unix:///run/containerd/s/e92997ef70a1d992502ca7a8c61448f6ce80dabdb6c1b1529f9c6e83e2565d3e" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:10:57.468213 containerd[1572]: time="2026-04-16T02:10:57.467971283Z" level=info msg="connecting to shim d0d51dcff03504f1fef58134facfd4cd81ca116611e32da6c564cac072c5a2c6" address="unix:///run/containerd/s/5f74707208b0d02950181218f9914fc308cbc5438693fd3705e35aae6ffc62c0" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:10:57.498627 kubelet[2470]: E0416 02:10:57.497864 2470 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="3.2s" Apr 16 02:10:57.841946 systemd[1]: Started cri-containerd-8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454.scope - libcontainer container 8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454. Apr 16 02:10:58.125458 systemd[1]: Started cri-containerd-4366e55fb3ab1fdb8980a6dd50bc34a4735f26e960d3230fb113208a8c6f0e52.scope - libcontainer container 4366e55fb3ab1fdb8980a6dd50bc34a4735f26e960d3230fb113208a8c6f0e52. 
Apr 16 02:10:58.145628 systemd[1]: Started cri-containerd-d0d51dcff03504f1fef58134facfd4cd81ca116611e32da6c564cac072c5a2c6.scope - libcontainer container d0d51dcff03504f1fef58134facfd4cd81ca116611e32da6c564cac072c5a2c6. Apr 16 02:10:58.277033 kubelet[2470]: I0416 02:10:58.276897 2470 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 02:10:58.306324 kubelet[2470]: E0416 02:10:58.305681 2470 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Apr 16 02:10:58.541227 containerd[1572]: time="2026-04-16T02:10:58.541121466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454\"" Apr 16 02:10:58.596934 kubelet[2470]: E0416 02:10:58.596836 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 02:10:58.612481 kubelet[2470]: E0416 02:10:58.612267 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:10:58.618008 containerd[1572]: time="2026-04-16T02:10:58.617828456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:014f1e99632b67d73d9d9321e27acea7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4366e55fb3ab1fdb8980a6dd50bc34a4735f26e960d3230fb113208a8c6f0e52\"" Apr 16 02:10:58.623033 containerd[1572]: time="2026-04-16T02:10:58.622114152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0d51dcff03504f1fef58134facfd4cd81ca116611e32da6c564cac072c5a2c6\"" Apr 16 02:10:58.627253 kubelet[2470]: E0416 02:10:58.626073 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:10:58.628422 kubelet[2470]: E0416 02:10:58.628399 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:10:58.665226 containerd[1572]: time="2026-04-16T02:10:58.664855290Z" level=info msg="CreateContainer within sandbox \"8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 02:10:58.684367 containerd[1572]: time="2026-04-16T02:10:58.682885497Z" level=info msg="CreateContainer within sandbox \"4366e55fb3ab1fdb8980a6dd50bc34a4735f26e960d3230fb113208a8c6f0e52\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 02:10:58.700452 containerd[1572]: time="2026-04-16T02:10:58.698800756Z" level=info msg="CreateContainer within sandbox \"d0d51dcff03504f1fef58134facfd4cd81ca116611e32da6c564cac072c5a2c6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 02:10:58.739612 containerd[1572]: 
time="2026-04-16T02:10:58.739050063Z" level=info msg="Container 42f8ee2c39c1ece526ac37473a8a0bd280997855c294400c28ad5b3842ee2228: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:10:58.802792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1769924614.mount: Deactivated successfully. Apr 16 02:10:58.813635 containerd[1572]: time="2026-04-16T02:10:58.812899115Z" level=info msg="Container 337728579d48b12f093b27e02cc44cd5ee5660ab5b2351dd080450fb830808d7: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:10:58.826708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2304695346.mount: Deactivated successfully. Apr 16 02:10:58.877221 containerd[1572]: time="2026-04-16T02:10:58.874794327Z" level=info msg="Container a500918002e736f490697b989b5dc53af78e80ad98d850123d7414e4d6a2f590: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:10:58.918479 containerd[1572]: time="2026-04-16T02:10:58.916306184Z" level=info msg="CreateContainer within sandbox \"8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"42f8ee2c39c1ece526ac37473a8a0bd280997855c294400c28ad5b3842ee2228\"" Apr 16 02:10:58.931972 containerd[1572]: time="2026-04-16T02:10:58.929141849Z" level=info msg="CreateContainer within sandbox \"4366e55fb3ab1fdb8980a6dd50bc34a4735f26e960d3230fb113208a8c6f0e52\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"337728579d48b12f093b27e02cc44cd5ee5660ab5b2351dd080450fb830808d7\"" Apr 16 02:10:58.942259 containerd[1572]: time="2026-04-16T02:10:58.939264693Z" level=info msg="StartContainer for \"42f8ee2c39c1ece526ac37473a8a0bd280997855c294400c28ad5b3842ee2228\"" Apr 16 02:10:58.942259 containerd[1572]: time="2026-04-16T02:10:58.940682521Z" level=info msg="StartContainer for \"337728579d48b12f093b27e02cc44cd5ee5660ab5b2351dd080450fb830808d7\"" Apr 16 02:10:58.942259 containerd[1572]: time="2026-04-16T02:10:58.942062039Z" level=info msg="CreateContainer within sandbox \"d0d51dcff03504f1fef58134facfd4cd81ca116611e32da6c564cac072c5a2c6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a500918002e736f490697b989b5dc53af78e80ad98d850123d7414e4d6a2f590\"" Apr 16 02:10:58.951950 containerd[1572]: time="2026-04-16T02:10:58.951821106Z" level=info msg="StartContainer for \"a500918002e736f490697b989b5dc53af78e80ad98d850123d7414e4d6a2f590\"" Apr 16 02:10:58.970585 containerd[1572]: time="2026-04-16T02:10:58.968097910Z" level=info msg="connecting to shim 337728579d48b12f093b27e02cc44cd5ee5660ab5b2351dd080450fb830808d7" address="unix:///run/containerd/s/e92997ef70a1d992502ca7a8c61448f6ce80dabdb6c1b1529f9c6e83e2565d3e" protocol=ttrpc version=3 Apr 16 02:10:58.972452 containerd[1572]: time="2026-04-16T02:10:58.971904810Z" level=info msg="connecting to shim 42f8ee2c39c1ece526ac37473a8a0bd280997855c294400c28ad5b3842ee2228" address="unix:///run/containerd/s/19fb7b3958679c24ac66e8dd57527f0cf6dd433ec0ccb7dc7514e788b8b7a005" protocol=ttrpc version=3 Apr 16 02:10:59.000660 containerd[1572]: time="2026-04-16T02:10:59.000329209Z" level=info msg="connecting to shim a500918002e736f490697b989b5dc53af78e80ad98d850123d7414e4d6a2f590" address="unix:///run/containerd/s/5f74707208b0d02950181218f9914fc308cbc5438693fd3705e35aae6ffc62c0" protocol=ttrpc version=3 Apr 16 02:10:59.223676 systemd[1]: Started cri-containerd-a500918002e736f490697b989b5dc53af78e80ad98d850123d7414e4d6a2f590.scope - libcontainer container 
a500918002e736f490697b989b5dc53af78e80ad98d850123d7414e4d6a2f590. Apr 16 02:10:59.377416 systemd[1]: Started cri-containerd-42f8ee2c39c1ece526ac37473a8a0bd280997855c294400c28ad5b3842ee2228.scope - libcontainer container 42f8ee2c39c1ece526ac37473a8a0bd280997855c294400c28ad5b3842ee2228. Apr 16 02:10:59.425328 systemd[1]: Started cri-containerd-337728579d48b12f093b27e02cc44cd5ee5660ab5b2351dd080450fb830808d7.scope - libcontainer container 337728579d48b12f093b27e02cc44cd5ee5660ab5b2351dd080450fb830808d7. Apr 16 02:11:00.041987 containerd[1572]: time="2026-04-16T02:11:00.041904032Z" level=info msg="StartContainer for \"42f8ee2c39c1ece526ac37473a8a0bd280997855c294400c28ad5b3842ee2228\" returns successfully" Apr 16 02:11:00.075299 containerd[1572]: time="2026-04-16T02:11:00.073418293Z" level=info msg="StartContainer for \"a500918002e736f490697b989b5dc53af78e80ad98d850123d7414e4d6a2f590\" returns successfully" Apr 16 02:11:00.450634 containerd[1572]: time="2026-04-16T02:11:00.450363169Z" level=info msg="StartContainer for \"337728579d48b12f093b27e02cc44cd5ee5660ab5b2351dd080450fb830808d7\" returns successfully" Apr 16 02:11:00.475636 kubelet[2470]: E0416 02:11:00.474058 2470 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 02:11:00.517378 kubelet[2470]: E0416 02:11:00.517196 2470 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:11:00.517819 kubelet[2470]: E0416 02:11:00.517775 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:11:00.620766 kubelet[2470]: E0416 02:11:00.619294 2470 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:11:00.621510 kubelet[2470]: E0416 02:11:00.621487 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:11:00.636626 kubelet[2470]: E0416 02:11:00.635684 2470 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:11:00.650985 kubelet[2470]: E0416 02:11:00.643068 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:11:00.704587 kubelet[2470]: E0416 02:11:00.703955 2470 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="6.4s" Apr 16 02:11:00.792391 kubelet[2470]: E0416 02:11:00.792315 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 02:11:01.640713 kubelet[2470]: I0416 02:11:01.640611 2470 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 02:11:01.736630 kubelet[2470]: E0416 02:11:01.736046 2470 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:11:01.740193 kubelet[2470]: E0416 02:11:01.740051 2470 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:11:01.743467 kubelet[2470]: E0416 02:11:01.742117 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:11:01.743467 kubelet[2470]: E0416 02:11:01.742424 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:11:01.756197 kubelet[2470]: E0416 02:11:01.742128 2470 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:11:01.789939 kubelet[2470]: E0416 02:11:01.789765 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:11:02.814714 kubelet[2470]: E0416 02:11:02.810219 2470 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:11:02.814714 kubelet[2470]: E0416 02:11:02.811273 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:11:02.815781 kubelet[2470]: E0416 02:11:02.815603 2470 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:11:02.816149 kubelet[2470]: E0416 02:11:02.816069 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:11:03.392721 kubelet[2470]: E0416 02:11:03.392526 2470 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:11:03.394167 kubelet[2470]: E0416 02:11:03.394040 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:11:03.971169 kubelet[2470]: E0416 02:11:03.971026 2470 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:11:03.972030 kubelet[2470]: E0416 02:11:03.971903 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:11:05.016066 kubelet[2470]: E0416 02:11:05.005757 2470 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node 
\"localhost\" not found" Apr 16 02:11:10.009199 kubelet[2470]: E0416 02:11:10.008798 2470 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:11:10.009199 kubelet[2470]: E0416 02:11:10.009001 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:11:11.412717 kubelet[2470]: E0416 02:11:11.412171 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 02:11:11.666081 kubelet[2470]: E0416 02:11:11.656439 2470 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 16 02:11:12.411277 kubelet[2470]: E0416 02:11:12.411216 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 02:11:13.544100 kubelet[2470]: E0416 02:11:13.543766 2470 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:11:13.544981 kubelet[2470]: E0416 02:11:13.544928 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:11:14.471902 kubelet[2470]: E0416 02:11:14.470244 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 02:11:15.006625 kubelet[2470]: E0416 02:11:15.006431 2470 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 02:11:17.123136 kubelet[2470]: E0416 02:11:17.118678 2470 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 16 02:11:17.162899 kubelet[2470]: E0416 02:11:17.133188 2470 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6b46645ff417f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 02:10:54.398259583 +0000 UTC m=+3.059777798,LastTimestamp:2026-04-16 02:10:54.398259583 +0000 UTC 
m=+3.059777798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 02:11:18.115738 kubelet[2470]: I0416 02:11:18.115442 2470 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 02:11:19.028518 kubelet[2470]: E0416 02:11:19.028195 2470 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 02:11:22.537599 kubelet[2470]: E0416 02:11:22.537318 2470 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:11:22.538321 kubelet[2470]: E0416 02:11:22.538299 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:11:23.319224 kubelet[2470]: E0416 02:11:23.318422 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 02:11:25.015909 kubelet[2470]: E0416 02:11:25.013035 2470 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 02:11:28.229611 kubelet[2470]: E0416 02:11:28.228103 2470 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 16 02:11:32.928288 kubelet[2470]: E0416 02:11:32.927327 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 02:11:33.422235 kubelet[2470]: E0416 02:11:33.422122 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 02:11:33.646986 kubelet[2470]: E0416 02:11:33.646830 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 02:11:34.139102 kubelet[2470]: E0416 02:11:34.138339 2470 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 16 02:11:35.029239 kubelet[2470]: E0416 02:11:35.029107 2470 eviction_manager.go:292] "Eviction manager: 
failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 02:11:35.240765 kubelet[2470]: I0416 02:11:35.240657 2470 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 02:11:37.204040 kubelet[2470]: E0416 02:11:37.201123 2470 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6b46645ff417f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 02:10:54.398259583 +0000 UTC m=+3.059777798,LastTimestamp:2026-04-16 02:10:54.398259583 +0000 UTC m=+3.059777798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 02:11:45.033371 kubelet[2470]: E0416 02:11:45.033222 2470 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 02:11:45.280369 kubelet[2470]: E0416 02:11:45.280191 2470 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 16 02:11:45.806865 kubelet[2470]: E0416 02:11:45.806637 2470 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 02:11:45.807347 kubelet[2470]: E0416 02:11:45.807043 2470 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 02:11:51.141628 kubelet[2470]: E0416 02:11:51.141360 2470 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 16 02:11:52.361042 kubelet[2470]: I0416 02:11:52.360689 2470 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 02:11:55.035789 kubelet[2470]: E0416 02:11:55.035710 2470 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 02:11:57.216994 kubelet[2470]: E0416 02:11:57.216664 2470 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6b46645ff417f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 02:10:54.398259583 +0000 UTC m=+3.059777798,LastTimestamp:2026-04-16 02:10:54.398259583 +0000 UTC 
m=+3.059777798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 02:11:58.942301 kubelet[2470]: E0416 02:11:58.941427 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 02:12:00.167327 kubelet[2470]: E0416 02:12:00.165515 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 02:12:00.175366 kubelet[2470]: E0416 02:12:00.175000 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 02:12:02.381854 kubelet[2470]: E0416 02:12:02.380239 2470 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 16 02:12:04.130166 kubelet[2470]: E0416 02:12:04.129870 2470 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 02:12:05.068112 kubelet[2470]: E0416 02:12:05.061268 2470 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 02:12:08.211711 kubelet[2470]: E0416 02:12:08.211049 2470 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 16 02:12:09.502229 kubelet[2470]: I0416 02:12:09.502133 2470 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 02:12:15.071884 kubelet[2470]: E0416 02:12:15.071663 2470 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 02:12:16.683602 kubelet[2470]: E0416 02:12:16.682830 2470 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:12:16.709597 kubelet[2470]: E0416 02:12:16.709183 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:12:17.254893 kubelet[2470]: E0416 02:12:17.245335 2470 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6b46645ff417f default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 02:10:54.398259583 +0000 UTC m=+3.059777798,LastTimestamp:2026-04-16 02:10:54.398259583 +0000 UTC m=+3.059777798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 02:12:19.511462 kubelet[2470]: E0416 02:12:19.511332 2470 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 16 02:12:25.167431 kubelet[2470]: E0416 02:12:25.162126 2470 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 02:12:25.235748 kubelet[2470]: E0416 02:12:25.235159 2470 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 16 02:12:26.620652 kubelet[2470]: I0416 02:12:26.593207 2470 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 02:12:27.099758 kubelet[2470]: E0416 02:12:27.099589 2470 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Apr 16 02:12:27.386689 kubelet[2470]: I0416 02:12:27.141426 2470 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 16 02:12:27.386689 kubelet[2470]: E0416 02:12:27.148175 2470 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 16 02:12:28.274730 kubelet[2470]: E0416 02:12:28.274612 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:28.403630 kubelet[2470]: E0416 02:12:28.403085 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:28.508692 kubelet[2470]: E0416 02:12:28.508275 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:28.611569 kubelet[2470]: E0416 02:12:28.610978 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:28.715052 kubelet[2470]: E0416 02:12:28.713725 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:28.815380 kubelet[2470]: E0416 02:12:28.815288 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:28.925742 kubelet[2470]: E0416 02:12:28.917143 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:29.056706 kubelet[2470]: E0416 02:12:29.028345 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:29.193678 kubelet[2470]: E0416 02:12:29.192418 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:29.307881 
kubelet[2470]: E0416 02:12:29.294049 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:29.398451 kubelet[2470]: E0416 02:12:29.398175 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:29.508357 kubelet[2470]: E0416 02:12:29.508046 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:29.624655 kubelet[2470]: E0416 02:12:29.622475 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:29.730699 kubelet[2470]: E0416 02:12:29.724513 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:29.835923 kubelet[2470]: E0416 02:12:29.832655 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:29.935497 kubelet[2470]: E0416 02:12:29.935359 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:30.036240 kubelet[2470]: E0416 02:12:30.036021 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:30.157419 kubelet[2470]: E0416 02:12:30.155481 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:30.261063 kubelet[2470]: E0416 02:12:30.260531 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:30.405015 kubelet[2470]: E0416 02:12:30.404648 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:30.510317 kubelet[2470]: E0416 02:12:30.510071 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:30.618769 kubelet[2470]: E0416 02:12:30.617764 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:30.726963 kubelet[2470]: E0416 02:12:30.724945 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:30.828179 kubelet[2470]: E0416 02:12:30.827371 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:30.945075 kubelet[2470]: E0416 02:12:30.944002 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:31.047607 kubelet[2470]: E0416 02:12:31.046619 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:31.198465 kubelet[2470]: E0416 02:12:31.197705 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:31.310220 kubelet[2470]: E0416 02:12:31.309387 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:31.414334 kubelet[2470]: E0416 02:12:31.414018 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:31.548142 kubelet[2470]: E0416 02:12:31.542500 2470 kubelet_node_status.go:404] "Error getting the 
current node from lister" err="node \"localhost\" not found" Apr 16 02:12:31.682535 kubelet[2470]: E0416 02:12:31.682282 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:31.786708 kubelet[2470]: E0416 02:12:31.783679 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:31.898339 kubelet[2470]: E0416 02:12:31.895908 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:32.007702 kubelet[2470]: E0416 02:12:32.006678 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:32.215151 kubelet[2470]: E0416 02:12:32.209922 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:32.337004 kubelet[2470]: E0416 02:12:32.335131 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:32.446960 kubelet[2470]: E0416 02:12:32.445026 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:32.612143 kubelet[2470]: E0416 02:12:32.608994 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:32.713698 kubelet[2470]: E0416 02:12:32.713068 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:32.816736 kubelet[2470]: E0416 02:12:32.816613 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:32.923278 kubelet[2470]: E0416 02:12:32.919441 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:33.029762 kubelet[2470]: E0416 02:12:33.027633 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:33.144620 kubelet[2470]: E0416 02:12:33.139708 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:33.294185 kubelet[2470]: E0416 02:12:33.294071 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:33.397303 kubelet[2470]: E0416 02:12:33.396959 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:33.498774 kubelet[2470]: E0416 02:12:33.498628 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:33.611689 kubelet[2470]: E0416 02:12:33.605071 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:33.721301 kubelet[2470]: E0416 02:12:33.714760 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:33.829114 kubelet[2470]: E0416 02:12:33.828904 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:33.938963 kubelet[2470]: E0416 02:12:33.935417 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:34.037497 
kubelet[2470]: E0416 02:12:34.037235 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:34.143928 kubelet[2470]: E0416 02:12:34.143127 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:34.357495 kubelet[2470]: E0416 02:12:34.357093 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:34.522065 kubelet[2470]: E0416 02:12:34.515251 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:34.690028 kubelet[2470]: E0416 02:12:34.689276 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:34.843584 kubelet[2470]: E0416 02:12:34.841410 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:35.002314 kubelet[2470]: E0416 02:12:35.001691 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:35.110929 kubelet[2470]: E0416 02:12:35.110806 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:35.205299 kubelet[2470]: E0416 02:12:35.204882 2470 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 02:12:35.214925 kubelet[2470]: E0416 02:12:35.214455 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:35.339304 kubelet[2470]: E0416 02:12:35.333378 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:35.469735 kubelet[2470]: E0416 02:12:35.469090 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:35.577783 kubelet[2470]: E0416 02:12:35.577699 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:35.895465 kubelet[2470]: E0416 02:12:35.891637 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:35.999123 kubelet[2470]: E0416 02:12:35.998256 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:36.103146 kubelet[2470]: E0416 02:12:36.102686 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:36.208306 kubelet[2470]: E0416 02:12:36.207922 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:36.311539 kubelet[2470]: E0416 02:12:36.311233 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:36.430796 kubelet[2470]: E0416 02:12:36.418318 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:36.552228 kubelet[2470]: E0416 02:12:36.552022 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:36.675688 kubelet[2470]: E0416 02:12:36.675127 2470 kubelet_node_status.go:404] 
"Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:36.780918 kubelet[2470]: E0416 02:12:36.780620 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:36.892711 kubelet[2470]: E0416 02:12:36.886854 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:36.991924 kubelet[2470]: E0416 02:12:36.991775 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:37.095479 kubelet[2470]: E0416 02:12:37.095325 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:37.216935 kubelet[2470]: E0416 02:12:37.216602 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:37.338863 kubelet[2470]: E0416 02:12:37.333504 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:37.495108 kubelet[2470]: E0416 02:12:37.494704 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:37.600408 kubelet[2470]: E0416 02:12:37.596216 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:37.718985 kubelet[2470]: E0416 02:12:37.714472 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:37.864441 kubelet[2470]: E0416 02:12:37.858519 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:37.999130 kubelet[2470]: E0416 02:12:37.997255 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:38.100985 kubelet[2470]: E0416 02:12:38.099159 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:38.207129 kubelet[2470]: E0416 02:12:38.206389 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:38.308679 kubelet[2470]: E0416 02:12:38.308294 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:38.424817 kubelet[2470]: E0416 02:12:38.423442 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:38.514753 kubelet[2470]: E0416 02:12:38.514016 2470 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 16 02:12:38.865806 kubelet[2470]: E0416 02:12:38.863131 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:38.994282 kubelet[2470]: E0416 02:12:38.994145 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:39.103657 kubelet[2470]: E0416 02:12:39.102107 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:39.211506 kubelet[2470]: E0416 02:12:39.204683 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node 
\"localhost\" not found" Apr 16 02:12:39.319304 kubelet[2470]: E0416 02:12:39.312035 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:39.432047 kubelet[2470]: E0416 02:12:39.431336 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:39.537702 kubelet[2470]: E0416 02:12:39.536495 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:39.639588 kubelet[2470]: E0416 02:12:39.639419 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:39.764601 kubelet[2470]: E0416 02:12:39.764045 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:39.878536 kubelet[2470]: E0416 02:12:39.873488 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:39.982324 kubelet[2470]: E0416 02:12:39.977459 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:40.088681 kubelet[2470]: E0416 02:12:40.087106 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:40.215181 kubelet[2470]: E0416 02:12:40.214178 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:40.317780 kubelet[2470]: E0416 02:12:40.317440 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:40.425009 kubelet[2470]: E0416 02:12:40.422681 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:40.528601 kubelet[2470]: E0416 02:12:40.527756 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:40.686444 kubelet[2470]: E0416 02:12:40.674313 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:40.793837 kubelet[2470]: E0416 02:12:40.793153 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:40.899135 kubelet[2470]: E0416 02:12:40.898746 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:41.022760 kubelet[2470]: E0416 02:12:41.022605 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:41.129167 kubelet[2470]: E0416 02:12:41.126780 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:41.235457 kubelet[2470]: E0416 02:12:41.235269 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:41.344282 kubelet[2470]: E0416 02:12:41.342471 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:41.483321 kubelet[2470]: E0416 02:12:41.482201 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:41.603038 kubelet[2470]: E0416 02:12:41.601917 2470 
kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:41.712039 kubelet[2470]: E0416 02:12:41.705122 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:41.817197 kubelet[2470]: E0416 02:12:41.815359 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:41.961030 kubelet[2470]: E0416 02:12:41.916884 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:42.074741 kubelet[2470]: E0416 02:12:42.069508 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:42.175594 kubelet[2470]: E0416 02:12:42.175256 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:42.321137 kubelet[2470]: E0416 02:12:42.318406 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:42.423758 kubelet[2470]: E0416 02:12:42.419726 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:42.527321 kubelet[2470]: E0416 02:12:42.527162 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:42.712720 kubelet[2470]: E0416 02:12:42.712176 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:42.815884 kubelet[2470]: E0416 02:12:42.815437 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:42.922754 kubelet[2470]: E0416 02:12:42.922481 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:43.039004 kubelet[2470]: E0416 02:12:43.034782 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:43.140486 kubelet[2470]: E0416 02:12:43.140257 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:43.243821 kubelet[2470]: E0416 02:12:43.241739 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:43.345391 kubelet[2470]: E0416 02:12:43.342034 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:43.482193 kubelet[2470]: E0416 02:12:43.482028 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:43.591578 kubelet[2470]: E0416 02:12:43.591329 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:43.682425 kubelet[2470]: E0416 02:12:43.674190 2470 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:12:43.682425 kubelet[2470]: E0416 02:12:43.677151 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:12:43.725984 kubelet[2470]: E0416 
02:12:43.725245 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:43.837749 kubelet[2470]: E0416 02:12:43.833320 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:43.941639 kubelet[2470]: E0416 02:12:43.939299 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:44.051208 kubelet[2470]: E0416 02:12:44.048423 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:44.169082 kubelet[2470]: E0416 02:12:44.165676 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:44.355268 kubelet[2470]: E0416 02:12:44.354858 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:44.496406 kubelet[2470]: E0416 02:12:44.492886 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:44.599055 kubelet[2470]: E0416 02:12:44.598847 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:44.707739 kubelet[2470]: E0416 02:12:44.707443 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:44.809421 kubelet[2470]: E0416 02:12:44.809108 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:44.924688 kubelet[2470]: E0416 02:12:44.915174 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:45.031964 kubelet[2470]: E0416 02:12:45.031504 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:45.136480 kubelet[2470]: E0416 02:12:45.136337 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:45.229199 kubelet[2470]: E0416 02:12:45.229095 2470 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 02:12:45.304048 kubelet[2470]: E0416 02:12:45.303516 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:45.411206 kubelet[2470]: E0416 02:12:45.409426 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:45.535292 kubelet[2470]: E0416 02:12:45.535021 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:45.639763 kubelet[2470]: E0416 02:12:45.636534 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:45.744929 kubelet[2470]: E0416 02:12:45.743822 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:45.903932 kubelet[2470]: E0416 02:12:45.898325 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:46.025124 kubelet[2470]: E0416 02:12:46.021916 2470 kubelet_node_status.go:404] "Error getting the 
current node from lister" err="node \"localhost\" not found" Apr 16 02:12:46.130615 kubelet[2470]: E0416 02:12:46.127488 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:46.232204 kubelet[2470]: E0416 02:12:46.228808 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:46.340646 kubelet[2470]: E0416 02:12:46.337970 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:46.501693 kubelet[2470]: E0416 02:12:46.500614 2470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:12:46.634173 kubelet[2470]: I0416 02:12:46.634037 2470 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 02:12:47.310744 kubelet[2470]: I0416 02:12:47.310681 2470 apiserver.go:52] "Watching apiserver" Apr 16 02:12:47.382453 kubelet[2470]: I0416 02:12:47.382259 2470 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 02:12:47.432442 kubelet[2470]: E0416 02:12:47.432153 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:12:47.432442 kubelet[2470]: I0416 02:12:47.432177 2470 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 02:12:47.698189 kubelet[2470]: I0416 02:12:47.691535 2470 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 02:12:47.870851 kubelet[2470]: I0416 02:12:47.870712 2470 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 02:12:48.011750 kubelet[2470]: E0416 02:12:48.011576 2470 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 16 02:12:48.014684 kubelet[2470]: E0416 02:12:48.014574 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:12:48.082516 kubelet[2470]: E0416 02:12:48.082194 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:12:48.637623 kubelet[2470]: E0416 02:12:48.634296 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:12:54.441739 kubelet[2470]: E0416 02:12:54.439731 2470 kubelet_node_status.go:398] "Node not becoming ready in time after startup" Apr 16 02:12:56.143139 kubelet[2470]: I0416 02:12:56.140725 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=9.14006531 podStartE2EDuration="9.14006531s" podCreationTimestamp="2026-04-16 02:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:12:56.081397646 +0000 UTC m=+124.742915844" watchObservedRunningTime="2026-04-16 02:12:56.14006531 +0000 UTC m=+124.801583504" Apr 16 
02:12:56.173066 kubelet[2470]: E0416 02:12:56.171982 2470 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:12:57.401107 kubelet[2470]: I0416 02:12:57.400872 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=10.400818258 podStartE2EDuration="10.400818258s" podCreationTimestamp="2026-04-16 02:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:12:57.400771755 +0000 UTC m=+126.062289947" watchObservedRunningTime="2026-04-16 02:12:57.400818258 +0000 UTC m=+126.062336449" Apr 16 02:12:58.086731 kubelet[2470]: I0416 02:12:58.084449 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=11.083313806 podStartE2EDuration="11.083313806s" podCreationTimestamp="2026-04-16 02:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:12:58.081274196 +0000 UTC m=+126.742792391" watchObservedRunningTime="2026-04-16 02:12:58.083313806 +0000 UTC m=+126.744831996" Apr 16 02:13:01.202671 kubelet[2470]: E0416 02:13:01.201963 2470 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:13:06.246049 kubelet[2470]: E0416 02:13:06.245763 2470 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:13:11.070059 systemd[1]: cri-containerd-42f8ee2c39c1ece526ac37473a8a0bd280997855c294400c28ad5b3842ee2228.scope: Deactivated successfully. Apr 16 02:13:11.075290 systemd[1]: cri-containerd-42f8ee2c39c1ece526ac37473a8a0bd280997855c294400c28ad5b3842ee2228.scope: Consumed 4.495s CPU time, 22.2M memory peak. Apr 16 02:13:11.138055 containerd[1572]: time="2026-04-16T02:13:11.137925731Z" level=info msg="received container exit event container_id:\"42f8ee2c39c1ece526ac37473a8a0bd280997855c294400c28ad5b3842ee2228\" id:\"42f8ee2c39c1ece526ac37473a8a0bd280997855c294400c28ad5b3842ee2228\" pid:2706 exit_status:1 exited_at:{seconds:1776305591 nanos:135833484}" Apr 16 02:13:11.299909 kubelet[2470]: E0416 02:13:11.267246 2470 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:13:11.396768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42f8ee2c39c1ece526ac37473a8a0bd280997855c294400c28ad5b3842ee2228-rootfs.mount: Deactivated successfully. 
Apr 16 02:13:12.081267 kubelet[2470]: I0416 02:13:12.080005 2470 scope.go:117] "RemoveContainer" containerID="42f8ee2c39c1ece526ac37473a8a0bd280997855c294400c28ad5b3842ee2228" Apr 16 02:13:12.087092 kubelet[2470]: E0416 02:13:12.086495 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:13:12.195250 containerd[1572]: time="2026-04-16T02:13:12.195099629Z" level=info msg="CreateContainer within sandbox \"8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 16 02:13:12.304965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3199722572.mount: Deactivated successfully. Apr 16 02:13:12.328596 containerd[1572]: time="2026-04-16T02:13:12.320821667Z" level=info msg="Container 6b4ea757f23e306ae6bfd9f988c8d5844ab4e5030a8da90a83e19d0058779129: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:13:12.517756 containerd[1572]: time="2026-04-16T02:13:12.516372068Z" level=info msg="CreateContainer within sandbox \"8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6b4ea757f23e306ae6bfd9f988c8d5844ab4e5030a8da90a83e19d0058779129\"" Apr 16 02:13:12.534804 containerd[1572]: time="2026-04-16T02:13:12.534702668Z" level=info msg="StartContainer for \"6b4ea757f23e306ae6bfd9f988c8d5844ab4e5030a8da90a83e19d0058779129\"" Apr 16 02:13:12.540671 containerd[1572]: time="2026-04-16T02:13:12.539191431Z" level=info msg="connecting to shim 6b4ea757f23e306ae6bfd9f988c8d5844ab4e5030a8da90a83e19d0058779129" address="unix:///run/containerd/s/19fb7b3958679c24ac66e8dd57527f0cf6dd433ec0ccb7dc7514e788b8b7a005" protocol=ttrpc version=3 Apr 16 02:13:12.946982 systemd[1]: Started cri-containerd-6b4ea757f23e306ae6bfd9f988c8d5844ab4e5030a8da90a83e19d0058779129.scope - libcontainer container 6b4ea757f23e306ae6bfd9f988c8d5844ab4e5030a8da90a83e19d0058779129. 
Apr 16 02:13:14.133146 containerd[1572]: time="2026-04-16T02:13:14.133074527Z" level=info msg="StartContainer for \"6b4ea757f23e306ae6bfd9f988c8d5844ab4e5030a8da90a83e19d0058779129\" returns successfully" Apr 16 02:13:14.723222 kubelet[2470]: E0416 02:13:14.723022 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:13:16.342073 kubelet[2470]: E0416 02:13:16.338450 2470 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:13:21.444079 kubelet[2470]: E0416 02:13:21.440883 2470 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:13:23.439640 kubelet[2470]: E0416 02:13:23.437152 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:13:26.541429 kubelet[2470]: E0416 02:13:26.541134 2470 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:13:31.669608 kubelet[2470]: E0416 02:13:31.668348 2470 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:13:33.565662 kubelet[2470]: E0416 02:13:33.565146 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:13:36.720411 kubelet[2470]: E0416 02:13:36.720069 2470 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:13:40.917252 kubelet[2470]: E0416 02:13:40.917121 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:13:41.784312 kubelet[2470]: E0416 02:13:41.780161 2470 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:13:46.796760 kubelet[2470]: E0416 02:13:46.796429 2470 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:13:49.891880 systemd[1]: Reload requested from client PID 2818 ('systemctl') (unit session-9.scope)... Apr 16 02:13:49.891911 systemd[1]: Reloading... Apr 16 02:13:50.622009 kubelet[2470]: E0416 02:13:50.621758 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:13:50.989825 zram_generator::config[2858]: No configuration found. 
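The recurring dns.go:154 "Nameserver limits exceeded" entries above come from /etc/resolv.conf listing more nameserver lines than the resolver honors; only the first three ("1.1.1.1 1.0.0.1 8.8.8.8") are applied and the rest are dropped. The following is a minimal standalone Go sketch of that check, not kubelet's actual implementation; the path and the three-entry limit are the conventional defaults.

```go
// nameserver_check.go — standalone sketch (NOT kubelet's dns.go) illustrating why the
// kubelet logs "Nameserver limits exceeded": only the first three "nameserver" entries
// in /etc/resolv.conf are honored, so any further entries are silently omitted.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read resolv.conf:", err)
		os.Exit(1)
	}
	defer f.Close()

	var nameservers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}

	const limit = 3 // the resolver honors at most three nameservers
	if len(nameservers) > limit {
		fmt.Printf("Nameserver limits exceeded: %d configured, applied line is: %s\n",
			len(nameservers), strings.Join(nameservers[:limit], " "))
	} else {
		fmt.Printf("nameservers within limit: %s\n", strings.Join(nameservers, " "))
	}
}
```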
Apr 16 02:13:51.424087 kubelet[2470]: E0416 02:13:51.420480 2470 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:13:51.842653 kubelet[2470]: E0416 02:13:51.842140 2470 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:13:52.902808 systemd[1]: Reloading finished in 3006 ms. Apr 16 02:13:53.236713 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:13:53.327941 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 02:13:53.337252 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:13:53.337985 systemd[1]: kubelet.service: Consumed 43.392s CPU time, 136.3M memory peak. Apr 16 02:13:53.385459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:13:54.551672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:13:54.629217 (kubelet)[2905]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 02:13:55.911796 kubelet[2905]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 02:13:55.911796 kubelet[2905]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 02:13:55.911796 kubelet[2905]: I0416 02:13:55.910438 2905 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 02:13:56.023926 kubelet[2905]: I0416 02:13:56.023160 2905 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 16 02:13:56.023926 kubelet[2905]: I0416 02:13:56.023450 2905 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 02:13:56.023926 kubelet[2905]: I0416 02:13:56.023520 2905 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 02:13:56.023926 kubelet[2905]: I0416 02:13:56.023538 2905 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 02:13:56.093140 kubelet[2905]: I0416 02:13:56.088293 2905 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 02:13:56.240236 kubelet[2905]: I0416 02:13:56.240065 2905 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 16 02:13:56.514994 kubelet[2905]: I0416 02:13:56.507021 2905 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 02:13:56.727129 kubelet[2905]: I0416 02:13:56.726765 2905 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 02:13:56.862435 kubelet[2905]: I0416 02:13:56.855907 2905 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 16 02:13:56.862435 kubelet[2905]: I0416 02:13:56.859457 2905 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 02:13:56.867050 kubelet[2905]: I0416 02:13:56.860598 2905 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 02:13:56.867050 kubelet[2905]: I0416 02:13:56.866213 2905 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 02:13:56.867050 kubelet[2905]: I0416 02:13:56.866352 2905 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 02:13:56.867050 kubelet[2905]: I0416 02:13:56.866515 2905 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 02:13:56.869959 kubelet[2905]: I0416 02:13:56.868959 2905 state_mem.go:36] "Initialized new in-memory state store" Apr 16 02:13:56.870916 kubelet[2905]: I0416 02:13:56.870821 2905 kubelet.go:475] "Attempting to sync node with API server" Apr 16 02:13:56.870916 kubelet[2905]: I0416 02:13:56.870861 2905 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 02:13:56.873207 kubelet[2905]: I0416 02:13:56.870934 2905 kubelet.go:387] "Adding apiserver pod source" Apr 16 02:13:56.873207 kubelet[2905]: I0416 02:13:56.870951 2905 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 02:13:56.904657 kubelet[2905]: I0416 02:13:56.904607 2905 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 16 02:13:56.928442 kubelet[2905]: I0416 02:13:56.928148 2905 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 02:13:56.935283 kubelet[2905]: I0416 02:13:56.934460 2905 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 02:13:57.136355 
kubelet[2905]: I0416 02:13:57.136176 2905 server.go:1262] "Started kubelet" Apr 16 02:13:57.175480 kubelet[2905]: I0416 02:13:57.174876 2905 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 02:13:57.183783 kubelet[2905]: I0416 02:13:57.181838 2905 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 02:13:57.183783 kubelet[2905]: I0416 02:13:57.181917 2905 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 02:13:57.195809 kubelet[2905]: I0416 02:13:57.191114 2905 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 02:13:57.219888 kubelet[2905]: I0416 02:13:57.217377 2905 server.go:310] "Adding debug handlers to kubelet server" Apr 16 02:13:57.389903 kubelet[2905]: I0416 02:13:57.388148 2905 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 02:13:57.417684 kubelet[2905]: I0416 02:13:57.401393 2905 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 02:13:57.675693 kubelet[2905]: I0416 02:13:57.669430 2905 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 16 02:13:57.715237 kubelet[2905]: E0416 02:13:57.635474 2905 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:13:57.739688 kubelet[2905]: I0416 02:13:57.678875 2905 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 02:13:57.742853 kubelet[2905]: I0416 02:13:57.741039 2905 reconciler.go:29] "Reconciler: start to sync state" Apr 16 02:13:57.892651 kubelet[2905]: I0416 02:13:57.891951 2905 apiserver.go:52] "Watching apiserver" Apr 16 02:13:57.963891 kubelet[2905]: I0416 02:13:57.962977 2905 factory.go:223] Registration of the containerd container factory successfully Apr 16 02:13:57.973042 kubelet[2905]: I0416 02:13:57.972367 2905 factory.go:223] Registration of the systemd container factory successfully Apr 16 02:13:57.984614 kubelet[2905]: I0416 02:13:57.983932 2905 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 02:13:57.997412 kubelet[2905]: E0416 02:13:57.996975 2905 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 02:13:58.009814 kubelet[2905]: I0416 02:13:58.003679 2905 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 02:13:58.100952 kubelet[2905]: I0416 02:13:58.077844 2905 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 16 02:13:58.100952 kubelet[2905]: I0416 02:13:58.078306 2905 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 16 02:13:58.100952 kubelet[2905]: I0416 02:13:58.079206 2905 kubelet.go:2428] "Starting kubelet main sync loop" Apr 16 02:13:58.156636 kubelet[2905]: E0416 02:13:58.156458 2905 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 02:13:58.272629 kubelet[2905]: E0416 02:13:58.271781 2905 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 02:13:58.519511 kubelet[2905]: E0416 02:13:58.479144 2905 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 02:13:58.916917 kubelet[2905]: E0416 02:13:58.916739 2905 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 02:13:59.724924 kubelet[2905]: E0416 02:13:59.722972 2905 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 02:14:00.015028 kubelet[2905]: I0416 02:14:00.014973 2905 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 02:14:00.015339 kubelet[2905]: I0416 02:14:00.015327 2905 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 02:14:00.015453 kubelet[2905]: I0416 02:14:00.015447 2905 state_mem.go:36] "Initialized new in-memory state store" Apr 16 02:14:00.029952 kubelet[2905]: I0416 02:14:00.029867 2905 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 16 02:14:00.031785 kubelet[2905]: I0416 02:14:00.031467 2905 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 16 02:14:00.040665 kubelet[2905]: I0416 02:14:00.032087 2905 policy_none.go:49] "None policy: Start" Apr 16 02:14:00.077489 kubelet[2905]: I0416 02:14:00.060226 2905 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 02:14:00.077489 kubelet[2905]: I0416 02:14:00.069336 2905 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 02:14:00.102490 kubelet[2905]: I0416 02:14:00.101884 2905 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 16 02:14:00.105814 kubelet[2905]: I0416 02:14:00.105349 2905 policy_none.go:47] "Start" Apr 16 02:14:00.233222 kubelet[2905]: E0416 02:14:00.232712 2905 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 02:14:00.271834 kubelet[2905]: I0416 02:14:00.263533 2905 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 02:14:00.275452 kubelet[2905]: I0416 02:14:00.275070 2905 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 02:14:00.316667 kubelet[2905]: I0416 02:14:00.314525 2905 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 02:14:00.422051 kubelet[2905]: E0416 02:14:00.421848 2905 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 16 02:14:00.916591 kubelet[2905]: I0416 02:14:00.913479 2905 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 02:14:01.410710 kubelet[2905]: I0416 02:14:01.407347 2905 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 16 02:14:01.415787 kubelet[2905]: I0416 02:14:01.415734 2905 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 02:14:01.418615 kubelet[2905]: I0416 02:14:01.418214 2905 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 16 02:14:01.425963 kubelet[2905]: I0416 02:14:01.424841 2905 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 02:14:01.473695 kubelet[2905]: I0416 02:14:01.472442 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:14:01.474232 kubelet[2905]: I0416 02:14:01.474118 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:14:01.479233 kubelet[2905]: I0416 02:14:01.477059 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:14:01.503422 kubelet[2905]: I0416 02:14:01.472731 2905 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 02:14:01.508530 kubelet[2905]: I0416 02:14:01.507932 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:14:01.534227 kubelet[2905]: I0416 02:14:01.534045 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:14:01.551132 kubelet[2905]: I0416 02:14:01.543451 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 16 02:14:01.555915 kubelet[2905]: I0416 02:14:01.555810 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/014f1e99632b67d73d9d9321e27acea7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"014f1e99632b67d73d9d9321e27acea7\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:14:01.556321 kubelet[2905]: I0416 02:14:01.556306 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/014f1e99632b67d73d9d9321e27acea7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"014f1e99632b67d73d9d9321e27acea7\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:14:01.565429 kubelet[2905]: I0416 02:14:01.565107 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/014f1e99632b67d73d9d9321e27acea7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"014f1e99632b67d73d9d9321e27acea7\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:14:01.835493 kubelet[2905]: E0416 02:14:01.835288 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:14:02.416013 kubelet[2905]: E0416 02:14:02.413870 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:14:02.709541 kubelet[2905]: E0416 02:14:02.708965 2905 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 16 02:14:02.738725 kubelet[2905]: E0416 02:14:02.737829 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:14:02.738725 kubelet[2905]: E0416 02:14:02.738434 2905 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 16 02:14:02.804748 kubelet[2905]: E0416 02:14:02.802424 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:14:03.027607 sudo[2947]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 16 02:14:03.027946 sudo[2947]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 16 02:14:03.907825 kubelet[2905]: E0416 02:14:03.905963 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:14:03.907825 kubelet[2905]: E0416 02:14:03.906477 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:14:03.988792 kubelet[2905]: E0416 02:14:03.986423 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:14:05.015623 kubelet[2905]: E0416 02:14:05.012948 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:14:05.019157 sudo[2947]: pam_unix(sudo:session): session closed for user root Apr 16 02:14:05.026315 kubelet[2905]: E0416 02:14:05.025855 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:14:05.042867 kubelet[2905]: E0416 02:14:05.038251 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:14:12.622977 kubelet[2905]: E0416 02:14:12.622523 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:14:13.875681 kubelet[2905]: E0416 02:14:13.875228 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:14:14.120653 kubelet[2905]: E0416 02:14:14.120490 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:15:00.100420 sudo[1800]: pam_unix(sudo:session): session closed for user root Apr 16 02:15:00.108918 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Apr 16 02:15:00.147877 sshd[1799]: Connection closed by 10.0.0.1 port 35284 Apr 16 02:15:00.163885 systemd[1]: sshd@8-10.0.0.34:22-10.0.0.1:35284.service: Deactivated successfully. Apr 16 02:15:00.234261 systemd[1]: session-9.scope: Deactivated successfully. Apr 16 02:15:00.235030 systemd[1]: session-9.scope: Consumed 29.105s CPU time, 276.7M memory peak. Apr 16 02:15:00.251536 systemd-logind[1559]: Session 9 logged out. Waiting for processes to exit. Apr 16 02:15:00.275126 systemd-logind[1559]: Removed session 9. Apr 16 02:15:07.215752 kubelet[2905]: E0416 02:15:07.213450 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:15:15.698755 kubelet[2905]: I0416 02:15:15.698655 2905 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 16 02:15:15.707437 containerd[1572]: time="2026-04-16T02:15:15.707119831Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 16 02:15:15.714684 kubelet[2905]: I0416 02:15:15.713399 2905 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 16 02:15:18.250656 kubelet[2905]: E0416 02:15:18.248803 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:15:30.088202 kubelet[2905]: I0416 02:15:30.084161 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-226fg\" (UniqueName: \"kubernetes.io/projected/01b7e1ef-3931-4b46-8f70-ce88202dc972-kube-api-access-226fg\") pod \"cilium-operator-6f9c7c5859-fqp9w\" (UID: \"01b7e1ef-3931-4b46-8f70-ce88202dc972\") " pod="kube-system/cilium-operator-6f9c7c5859-fqp9w" Apr 16 02:15:30.088202 kubelet[2905]: I0416 02:15:30.087198 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01b7e1ef-3931-4b46-8f70-ce88202dc972-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-fqp9w\" (UID: \"01b7e1ef-3931-4b46-8f70-ce88202dc972\") " pod="kube-system/cilium-operator-6f9c7c5859-fqp9w" Apr 16 02:15:30.137650 kubelet[2905]: E0416 02:15:30.134887 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:15:30.168535 systemd[1]: Created slice kubepods-besteffort-pod01b7e1ef_3931_4b46_8f70_ce88202dc972.slice - libcontainer container kubepods-besteffort-pod01b7e1ef_3931_4b46_8f70_ce88202dc972.slice. Apr 16 02:15:31.376628 kubelet[2905]: E0416 02:15:31.374530 2905 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Apr 16 02:15:31.378597 kubelet[2905]: E0416 02:15:31.378318 2905 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01b7e1ef-3931-4b46-8f70-ce88202dc972-cilium-config-path podName:01b7e1ef-3931-4b46-8f70-ce88202dc972 nodeName:}" failed. No retries permitted until 2026-04-16 02:15:31.878207379 +0000 UTC m=+97.170095925 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/01b7e1ef-3931-4b46-8f70-ce88202dc972-cilium-config-path") pod "cilium-operator-6f9c7c5859-fqp9w" (UID: "01b7e1ef-3931-4b46-8f70-ce88202dc972") : failed to sync configmap cache: timed out waiting for the condition Apr 16 02:15:34.169609 kubelet[2905]: E0416 02:15:34.160923 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:15:34.248791 containerd[1572]: time="2026-04-16T02:15:34.247216390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-fqp9w,Uid:01b7e1ef-3931-4b46-8f70-ce88202dc972,Namespace:kube-system,Attempt:0,}" Apr 16 02:15:34.805644 containerd[1572]: time="2026-04-16T02:15:34.805539795Z" level=info msg="connecting to shim 4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c" address="unix:///run/containerd/s/660c9f17e10a82e31ec8d7c1d115eeb12baa8d21aa12e4080ec56b1037d0d02a" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:15:36.286456 systemd[1]: Started cri-containerd-4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c.scope - libcontainer container 4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c. Apr 16 02:15:37.381773 kubelet[2905]: I0416 02:15:37.380869 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-bpf-maps\") pod \"cilium-qdfxn\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " pod="kube-system/cilium-qdfxn" Apr 16 02:15:37.394378 systemd[1]: Created slice kubepods-burstable-pod7fa6eadd_c61c_46c9_a233_f61300b39bd5.slice - libcontainer container kubepods-burstable-pod7fa6eadd_c61c_46c9_a233_f61300b39bd5.slice. 
Apr 16 02:15:37.413441 kubelet[2905]: I0416 02:15:37.408583 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-cilium-cgroup\") pod \"cilium-qdfxn\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " pod="kube-system/cilium-qdfxn" Apr 16 02:15:37.417380 kubelet[2905]: I0416 02:15:37.415939 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-xtables-lock\") pod \"cilium-qdfxn\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " pod="kube-system/cilium-qdfxn" Apr 16 02:15:37.435522 kubelet[2905]: I0416 02:15:37.424477 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fa6eadd-c61c-46c9-a233-f61300b39bd5-clustermesh-secrets\") pod \"cilium-qdfxn\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " pod="kube-system/cilium-qdfxn" Apr 16 02:15:37.442876 kubelet[2905]: I0416 02:15:37.429513 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-hostproc\") pod \"cilium-qdfxn\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " pod="kube-system/cilium-qdfxn" Apr 16 02:15:37.555679 kubelet[2905]: I0416 02:15:37.554533 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-cni-path\") pod \"cilium-qdfxn\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " pod="kube-system/cilium-qdfxn" Apr 16 02:15:37.569629 kubelet[2905]: I0416 02:15:37.554753 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-etc-cni-netd\") pod \"cilium-qdfxn\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " pod="kube-system/cilium-qdfxn" Apr 16 02:15:37.574327 kubelet[2905]: E0416 02:15:37.574218 2905 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-clustermesh\"" type="*v1.Secret" Apr 16 02:15:37.574698 kubelet[2905]: E0416 02:15:37.574386 2905 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"hubble-server-certs\"" type="*v1.Secret" Apr 16 02:15:37.574800 kubelet[2905]: I0416 02:15:37.574577 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fa6eadd-c61c-46c9-a233-f61300b39bd5-cilium-config-path\") pod \"cilium-qdfxn\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " pod="kube-system/cilium-qdfxn" Apr 16 02:15:37.574800 
kubelet[2905]: I0416 02:15:37.574772 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7fa6eadd-c61c-46c9-a233-f61300b39bd5-hubble-tls\") pod \"cilium-qdfxn\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " pod="kube-system/cilium-qdfxn" Apr 16 02:15:37.576657 kubelet[2905]: I0416 02:15:37.575027 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-cilium-run\") pod \"cilium-qdfxn\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " pod="kube-system/cilium-qdfxn" Apr 16 02:15:37.589419 kubelet[2905]: I0416 02:15:37.588094 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-host-proc-sys-kernel\") pod \"cilium-qdfxn\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " pod="kube-system/cilium-qdfxn" Apr 16 02:15:37.609710 kubelet[2905]: I0416 02:15:37.605700 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8qst\" (UniqueName: \"kubernetes.io/projected/7fa6eadd-c61c-46c9-a233-f61300b39bd5-kube-api-access-n8qst\") pod \"cilium-qdfxn\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " pod="kube-system/cilium-qdfxn" Apr 16 02:15:37.724454 kubelet[2905]: I0416 02:15:37.722117 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-lib-modules\") pod \"cilium-qdfxn\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " pod="kube-system/cilium-qdfxn" Apr 16 02:15:37.724454 kubelet[2905]: I0416 02:15:37.722310 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-host-proc-sys-net\") pod \"cilium-qdfxn\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " pod="kube-system/cilium-qdfxn" Apr 16 02:15:38.090838 kubelet[2905]: I0416 02:15:38.085021 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a69db3b-ee2d-4a0f-bd53-c7b15661e39b-xtables-lock\") pod \"kube-proxy-87plx\" (UID: \"9a69db3b-ee2d-4a0f-bd53-c7b15661e39b\") " pod="kube-system/kube-proxy-87plx" Apr 16 02:15:38.179583 kubelet[2905]: I0416 02:15:38.179315 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a69db3b-ee2d-4a0f-bd53-c7b15661e39b-lib-modules\") pod \"kube-proxy-87plx\" (UID: \"9a69db3b-ee2d-4a0f-bd53-c7b15661e39b\") " pod="kube-system/kube-proxy-87plx" Apr 16 02:15:38.180270 kubelet[2905]: I0416 02:15:38.180251 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9a69db3b-ee2d-4a0f-bd53-c7b15661e39b-kube-proxy\") pod \"kube-proxy-87plx\" (UID: \"9a69db3b-ee2d-4a0f-bd53-c7b15661e39b\") " pod="kube-system/kube-proxy-87plx" Apr 16 02:15:38.180380 kubelet[2905]: I0416 02:15:38.180367 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nndrg\" 
(UniqueName: \"kubernetes.io/projected/9a69db3b-ee2d-4a0f-bd53-c7b15661e39b-kube-api-access-nndrg\") pod \"kube-proxy-87plx\" (UID: \"9a69db3b-ee2d-4a0f-bd53-c7b15661e39b\") " pod="kube-system/kube-proxy-87plx" Apr 16 02:15:38.185277 systemd[1]: Created slice kubepods-besteffort-pod9a69db3b_ee2d_4a0f_bd53_c7b15661e39b.slice - libcontainer container kubepods-besteffort-pod9a69db3b_ee2d_4a0f_bd53_c7b15661e39b.slice. Apr 16 02:15:38.442987 containerd[1572]: time="2026-04-16T02:15:38.437418541Z" level=error msg="get state for 4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c" error="context deadline exceeded" Apr 16 02:15:38.500525 containerd[1572]: time="2026-04-16T02:15:38.499974083Z" level=warning msg="unknown status" status=0 Apr 16 02:15:38.977521 kubelet[2905]: E0416 02:15:38.975248 2905 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Apr 16 02:15:38.996340 kubelet[2905]: E0416 02:15:38.939153 2905 projected.go:266] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Apr 16 02:15:38.999604 kubelet[2905]: E0416 02:15:38.999450 2905 projected.go:196] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-qdfxn: failed to sync secret cache: timed out waiting for the condition Apr 16 02:15:39.001082 kubelet[2905]: E0416 02:15:38.998429 2905 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fa6eadd-c61c-46c9-a233-f61300b39bd5-clustermesh-secrets podName:7fa6eadd-c61c-46c9-a233-f61300b39bd5 nodeName:}" failed. No retries permitted until 2026-04-16 02:15:39.498269961 +0000 UTC m=+104.790158517 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/7fa6eadd-c61c-46c9-a233-f61300b39bd5-clustermesh-secrets") pod "cilium-qdfxn" (UID: "7fa6eadd-c61c-46c9-a233-f61300b39bd5") : failed to sync secret cache: timed out waiting for the condition Apr 16 02:15:39.005887 kubelet[2905]: E0416 02:15:39.004110 2905 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7fa6eadd-c61c-46c9-a233-f61300b39bd5-hubble-tls podName:7fa6eadd-c61c-46c9-a233-f61300b39bd5 nodeName:}" failed. No retries permitted until 2026-04-16 02:15:39.503518179 +0000 UTC m=+104.795406726 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/7fa6eadd-c61c-46c9-a233-f61300b39bd5-hubble-tls") pod "cilium-qdfxn" (UID: "7fa6eadd-c61c-46c9-a233-f61300b39bd5") : failed to sync secret cache: timed out waiting for the condition Apr 16 02:15:39.333110 kubelet[2905]: E0416 02:15:39.326762 2905 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 16 02:15:39.333110 kubelet[2905]: E0416 02:15:39.326900 2905 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9a69db3b-ee2d-4a0f-bd53-c7b15661e39b-kube-proxy podName:9a69db3b-ee2d-4a0f-bd53-c7b15661e39b nodeName:}" failed. No retries permitted until 2026-04-16 02:15:39.826874969 +0000 UTC m=+105.118763517 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/9a69db3b-ee2d-4a0f-bd53-c7b15661e39b-kube-proxy") pod "kube-proxy-87plx" (UID: "9a69db3b-ee2d-4a0f-bd53-c7b15661e39b") : failed to sync configmap cache: timed out waiting for the condition Apr 16 02:15:39.990743 containerd[1572]: time="2026-04-16T02:15:39.983775994Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 16 02:15:40.303083 containerd[1572]: time="2026-04-16T02:15:40.299349553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-fqp9w,Uid:01b7e1ef-3931-4b46-8f70-ce88202dc972,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c\"" Apr 16 02:15:40.536293 kubelet[2905]: E0416 02:15:40.522890 2905 projected.go:266] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Apr 16 02:15:40.536293 kubelet[2905]: E0416 02:15:40.522966 2905 projected.go:196] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-qdfxn: failed to sync secret cache: timed out waiting for the condition Apr 16 02:15:40.536293 kubelet[2905]: E0416 02:15:40.523217 2905 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7fa6eadd-c61c-46c9-a233-f61300b39bd5-hubble-tls podName:7fa6eadd-c61c-46c9-a233-f61300b39bd5 nodeName:}" failed. No retries permitted until 2026-04-16 02:15:41.523133642 +0000 UTC m=+106.815022202 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/7fa6eadd-c61c-46c9-a233-f61300b39bd5-hubble-tls") pod "cilium-qdfxn" (UID: "7fa6eadd-c61c-46c9-a233-f61300b39bd5") : failed to sync secret cache: timed out waiting for the condition Apr 16 02:15:40.536293 kubelet[2905]: E0416 02:15:40.522049 2905 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Apr 16 02:15:40.536293 kubelet[2905]: E0416 02:15:40.523265 2905 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fa6eadd-c61c-46c9-a233-f61300b39bd5-clustermesh-secrets podName:7fa6eadd-c61c-46c9-a233-f61300b39bd5 nodeName:}" failed. No retries permitted until 2026-04-16 02:15:41.523256852 +0000 UTC m=+106.815145413 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/7fa6eadd-c61c-46c9-a233-f61300b39bd5-clustermesh-secrets") pod "cilium-qdfxn" (UID: "7fa6eadd-c61c-46c9-a233-f61300b39bd5") : failed to sync secret cache: timed out waiting for the condition Apr 16 02:15:40.630517 kubelet[2905]: E0416 02:15:40.536349 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:15:40.686341 containerd[1572]: time="2026-04-16T02:15:40.686224565Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 16 02:15:42.085515 kubelet[2905]: E0416 02:15:42.085214 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:15:42.092262 containerd[1572]: time="2026-04-16T02:15:42.091515192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qdfxn,Uid:7fa6eadd-c61c-46c9-a233-f61300b39bd5,Namespace:kube-system,Attempt:0,}" Apr 16 02:15:43.418414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1228618315.mount: Deactivated successfully. Apr 16 02:15:44.668415 containerd[1572]: time="2026-04-16T02:15:44.664178932Z" level=info msg="connecting to shim 51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6" address="unix:///run/containerd/s/d949ffc9f32e946458adcc605e0373b54fdcb41e5a85eb2f45e2ff79287adf00" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:15:45.138644 kubelet[2905]: E0416 02:15:45.133851 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:15:45.147597 containerd[1572]: time="2026-04-16T02:15:45.145482870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-87plx,Uid:9a69db3b-ee2d-4a0f-bd53-c7b15661e39b,Namespace:kube-system,Attempt:0,}" Apr 16 02:15:45.172104 kubelet[2905]: E0416 02:15:45.169246 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.074s" Apr 16 02:15:45.757891 systemd[1]: Started cri-containerd-51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6.scope - libcontainer container 51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6. 
Apr 16 02:15:46.281319 containerd[1572]: time="2026-04-16T02:15:46.281243145Z" level=info msg="connecting to shim 8b438716b883e8fa413da5123edde775ed7f82bd8e61f35b29d76393fd7b2b32" address="unix:///run/containerd/s/4847be2302fb9ed5bb6a20610560c3b64790892622013d2d20b6baac6d5ec30a" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:15:47.149358 containerd[1572]: time="2026-04-16T02:15:47.141983240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qdfxn,Uid:7fa6eadd-c61c-46c9-a233-f61300b39bd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6\"" Apr 16 02:15:47.227296 kubelet[2905]: E0416 02:15:47.227184 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:15:47.240150 systemd[1]: Started cri-containerd-8b438716b883e8fa413da5123edde775ed7f82bd8e61f35b29d76393fd7b2b32.scope - libcontainer container 8b438716b883e8fa413da5123edde775ed7f82bd8e61f35b29d76393fd7b2b32. Apr 16 02:15:47.400335 kubelet[2905]: E0416 02:15:47.396910 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.221s" Apr 16 02:15:48.472152 containerd[1572]: time="2026-04-16T02:15:48.472074908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-87plx,Uid:9a69db3b-ee2d-4a0f-bd53-c7b15661e39b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b438716b883e8fa413da5123edde775ed7f82bd8e61f35b29d76393fd7b2b32\"" Apr 16 02:15:48.640755 kubelet[2905]: E0416 02:15:48.584232 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:15:49.273026 containerd[1572]: time="2026-04-16T02:15:49.268302959Z" level=info msg="CreateContainer within sandbox \"8b438716b883e8fa413da5123edde775ed7f82bd8e61f35b29d76393fd7b2b32\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 16 02:15:49.467150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1916885375.mount: Deactivated successfully. Apr 16 02:15:49.479620 containerd[1572]: time="2026-04-16T02:15:49.476197202Z" level=info msg="Container a6152aff14af701d496a9092d7169e915f3cc99fcf9e9b9ff3a43473450f4331: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:15:49.480480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2613765399.mount: Deactivated successfully. 
Apr 16 02:15:49.751772 containerd[1572]: time="2026-04-16T02:15:49.749025201Z" level=info msg="CreateContainer within sandbox \"8b438716b883e8fa413da5123edde775ed7f82bd8e61f35b29d76393fd7b2b32\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a6152aff14af701d496a9092d7169e915f3cc99fcf9e9b9ff3a43473450f4331\"" Apr 16 02:15:49.763822 containerd[1572]: time="2026-04-16T02:15:49.762862341Z" level=info msg="StartContainer for \"a6152aff14af701d496a9092d7169e915f3cc99fcf9e9b9ff3a43473450f4331\"" Apr 16 02:15:49.787298 containerd[1572]: time="2026-04-16T02:15:49.787031315Z" level=info msg="connecting to shim a6152aff14af701d496a9092d7169e915f3cc99fcf9e9b9ff3a43473450f4331" address="unix:///run/containerd/s/4847be2302fb9ed5bb6a20610560c3b64790892622013d2d20b6baac6d5ec30a" protocol=ttrpc version=3 Apr 16 02:15:50.715701 systemd[1]: Started cri-containerd-a6152aff14af701d496a9092d7169e915f3cc99fcf9e9b9ff3a43473450f4331.scope - libcontainer container a6152aff14af701d496a9092d7169e915f3cc99fcf9e9b9ff3a43473450f4331. Apr 16 02:15:50.802389 systemd[1]: cri-containerd-6b4ea757f23e306ae6bfd9f988c8d5844ab4e5030a8da90a83e19d0058779129.scope: Deactivated successfully. Apr 16 02:15:50.806104 systemd[1]: cri-containerd-6b4ea757f23e306ae6bfd9f988c8d5844ab4e5030a8da90a83e19d0058779129.scope: Consumed 40.669s CPU time, 50.4M memory peak. Apr 16 02:15:50.833627 containerd[1572]: time="2026-04-16T02:15:50.831709721Z" level=info msg="received container exit event container_id:\"6b4ea757f23e306ae6bfd9f988c8d5844ab4e5030a8da90a83e19d0058779129\" id:\"6b4ea757f23e306ae6bfd9f988c8d5844ab4e5030a8da90a83e19d0058779129\" pid:2798 exit_status:1 exited_at:{seconds:1776305750 nanos:813771312}" Apr 16 02:15:52.786879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b4ea757f23e306ae6bfd9f988c8d5844ab4e5030a8da90a83e19d0058779129-rootfs.mount: Deactivated successfully. Apr 16 02:15:53.568231 containerd[1572]: time="2026-04-16T02:15:53.567062674Z" level=info msg="StartContainer for \"a6152aff14af701d496a9092d7169e915f3cc99fcf9e9b9ff3a43473450f4331\" returns successfully" Apr 16 02:15:56.238821 kubelet[2905]: E0416 02:15:56.230973 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.834s" Apr 16 02:15:56.643539 systemd[1]: cri-containerd-a500918002e736f490697b989b5dc53af78e80ad98d850123d7414e4d6a2f590.scope: Deactivated successfully. Apr 16 02:15:56.713011 systemd[1]: cri-containerd-a500918002e736f490697b989b5dc53af78e80ad98d850123d7414e4d6a2f590.scope: Consumed 21.785s CPU time, 22.8M memory peak. 
Apr 16 02:15:56.736513 containerd[1572]: time="2026-04-16T02:15:56.725008143Z" level=info msg="received container exit event container_id:\"a500918002e736f490697b989b5dc53af78e80ad98d850123d7414e4d6a2f590\" id:\"a500918002e736f490697b989b5dc53af78e80ad98d850123d7414e4d6a2f590\" pid:2698 exit_status:1 exited_at:{seconds:1776305756 nanos:639161997}" Apr 16 02:15:56.737434 kubelet[2905]: I0416 02:15:56.722589 2905 scope.go:117] "RemoveContainer" containerID="42f8ee2c39c1ece526ac37473a8a0bd280997855c294400c28ad5b3842ee2228" Apr 16 02:15:56.765163 kubelet[2905]: E0416 02:15:56.765103 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:15:56.812741 kubelet[2905]: I0416 02:15:56.764937 2905 scope.go:117] "RemoveContainer" containerID="6b4ea757f23e306ae6bfd9f988c8d5844ab4e5030a8da90a83e19d0058779129" Apr 16 02:15:56.845023 kubelet[2905]: E0416 02:15:56.844910 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:15:56.950672 containerd[1572]: time="2026-04-16T02:15:56.947610984Z" level=info msg="RemoveContainer for \"42f8ee2c39c1ece526ac37473a8a0bd280997855c294400c28ad5b3842ee2228\"" Apr 16 02:15:57.185632 containerd[1572]: time="2026-04-16T02:15:57.167223899Z" level=info msg="RemoveContainer for \"42f8ee2c39c1ece526ac37473a8a0bd280997855c294400c28ad5b3842ee2228\" returns successfully" Apr 16 02:15:57.439836 containerd[1572]: time="2026-04-16T02:15:57.437302295Z" level=info msg="CreateContainer within sandbox \"8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}" Apr 16 02:15:57.851170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a500918002e736f490697b989b5dc53af78e80ad98d850123d7414e4d6a2f590-rootfs.mount: Deactivated successfully. 
Apr 16 02:15:57.913370 containerd[1572]: time="2026-04-16T02:15:57.911254116Z" level=info msg="Container b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:15:58.207289 kubelet[2905]: I0416 02:15:58.204095 2905 scope.go:117] "RemoveContainer" containerID="6b4ea757f23e306ae6bfd9f988c8d5844ab4e5030a8da90a83e19d0058779129" Apr 16 02:15:58.278578 containerd[1572]: time="2026-04-16T02:15:58.278479805Z" level=info msg="CreateContainer within sandbox \"8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d\"" Apr 16 02:15:58.298263 kubelet[2905]: E0416 02:15:58.296182 2905 kubelet_node_status.go:398] "Node not becoming ready in time after startup" Apr 16 02:15:58.493722 containerd[1572]: time="2026-04-16T02:15:58.491099233Z" level=info msg="StartContainer for \"b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d\"" Apr 16 02:15:58.550621 containerd[1572]: time="2026-04-16T02:15:58.550453358Z" level=warning msg="container event discarded" container=8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454 type=CONTAINER_CREATED_EVENT Apr 16 02:15:58.598005 containerd[1572]: time="2026-04-16T02:15:58.597913721Z" level=warning msg="container event discarded" container=8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454 type=CONTAINER_STARTED_EVENT Apr 16 02:15:58.627824 containerd[1572]: time="2026-04-16T02:15:58.627022228Z" level=info msg="RemoveContainer for \"6b4ea757f23e306ae6bfd9f988c8d5844ab4e5030a8da90a83e19d0058779129\"" Apr 16 02:15:58.636374 containerd[1572]: time="2026-04-16T02:15:58.636277563Z" level=warning msg="container event discarded" container=4366e55fb3ab1fdb8980a6dd50bc34a4735f26e960d3230fb113208a8c6f0e52 type=CONTAINER_CREATED_EVENT Apr 16 02:15:58.650482 containerd[1572]: time="2026-04-16T02:15:58.650394055Z" level=warning msg="container event discarded" container=4366e55fb3ab1fdb8980a6dd50bc34a4735f26e960d3230fb113208a8c6f0e52 type=CONTAINER_STARTED_EVENT Apr 16 02:15:58.651984 containerd[1572]: time="2026-04-16T02:15:58.651272394Z" level=warning msg="container event discarded" container=d0d51dcff03504f1fef58134facfd4cd81ca116611e32da6c564cac072c5a2c6 type=CONTAINER_CREATED_EVENT Apr 16 02:15:58.657423 containerd[1572]: time="2026-04-16T02:15:58.657062600Z" level=warning msg="container event discarded" container=d0d51dcff03504f1fef58134facfd4cd81ca116611e32da6c564cac072c5a2c6 type=CONTAINER_STARTED_EVENT Apr 16 02:15:58.681540 containerd[1572]: time="2026-04-16T02:15:58.636846921Z" level=info msg="connecting to shim b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d" address="unix:///run/containerd/s/19fb7b3958679c24ac66e8dd57527f0cf6dd433ec0ccb7dc7514e788b8b7a005" protocol=ttrpc version=3 Apr 16 02:15:58.888345 kubelet[2905]: I0416 02:15:58.881508 2905 scope.go:117] "RemoveContainer" containerID="a500918002e736f490697b989b5dc53af78e80ad98d850123d7414e4d6a2f590" Apr 16 02:15:58.926625 containerd[1572]: time="2026-04-16T02:15:58.923448815Z" level=warning msg="container event discarded" container=42f8ee2c39c1ece526ac37473a8a0bd280997855c294400c28ad5b3842ee2228 type=CONTAINER_CREATED_EVENT Apr 16 02:15:58.928229 kubelet[2905]: E0416 02:15:58.926709 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Apr 16 02:15:58.959395 systemd[1]: Started cri-containerd-b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d.scope - libcontainer container b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d. Apr 16 02:15:58.968323 containerd[1572]: time="2026-04-16T02:15:58.962478397Z" level=warning msg="container event discarded" container=337728579d48b12f093b27e02cc44cd5ee5660ab5b2351dd080450fb830808d7 type=CONTAINER_CREATED_EVENT Apr 16 02:15:58.977921 containerd[1572]: time="2026-04-16T02:15:58.974807879Z" level=warning msg="container event discarded" container=a500918002e736f490697b989b5dc53af78e80ad98d850123d7414e4d6a2f590 type=CONTAINER_CREATED_EVENT Apr 16 02:15:58.977921 containerd[1572]: time="2026-04-16T02:15:58.974979994Z" level=info msg="RemoveContainer for \"6b4ea757f23e306ae6bfd9f988c8d5844ab4e5030a8da90a83e19d0058779129\" returns successfully" Apr 16 02:15:59.178603 kubelet[2905]: E0416 02:15:59.177693 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:15:59.425480 containerd[1572]: time="2026-04-16T02:15:59.412205278Z" level=info msg="CreateContainer within sandbox \"d0d51dcff03504f1fef58134facfd4cd81ca116611e32da6c564cac072c5a2c6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 16 02:15:59.543970 containerd[1572]: time="2026-04-16T02:15:59.543916682Z" level=info msg="Container 5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:15:59.720479 containerd[1572]: time="2026-04-16T02:15:59.720391723Z" level=info msg="CreateContainer within sandbox \"d0d51dcff03504f1fef58134facfd4cd81ca116611e32da6c564cac072c5a2c6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4\"" Apr 16 02:15:59.724410 containerd[1572]: time="2026-04-16T02:15:59.723922935Z" level=info msg="StartContainer for \"b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d\" returns successfully" Apr 16 02:15:59.855187 containerd[1572]: time="2026-04-16T02:15:59.854784238Z" level=info msg="StartContainer for \"5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4\"" Apr 16 02:15:59.872856 containerd[1572]: time="2026-04-16T02:15:59.872801036Z" level=info msg="connecting to shim 5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4" address="unix:///run/containerd/s/5f74707208b0d02950181218f9914fc308cbc5438693fd3705e35aae6ffc62c0" protocol=ttrpc version=3 Apr 16 02:15:59.911124 containerd[1572]: time="2026-04-16T02:15:59.911040293Z" level=warning msg="container event discarded" container=42f8ee2c39c1ece526ac37473a8a0bd280997855c294400c28ad5b3842ee2228 type=CONTAINER_STARTED_EVENT Apr 16 02:16:00.096050 containerd[1572]: time="2026-04-16T02:16:00.095991701Z" level=warning msg="container event discarded" container=a500918002e736f490697b989b5dc53af78e80ad98d850123d7414e4d6a2f590 type=CONTAINER_STARTED_EVENT Apr 16 02:16:00.463489 containerd[1572]: time="2026-04-16T02:16:00.451462660Z" level=warning msg="container event discarded" container=337728579d48b12f093b27e02cc44cd5ee5660ab5b2351dd080450fb830808d7 type=CONTAINER_STARTED_EVENT Apr 16 02:16:00.765270 systemd[1]: Started cri-containerd-5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4.scope - libcontainer container 
5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4. Apr 16 02:16:01.317628 kubelet[2905]: E0416 02:16:01.298106 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.215s" Apr 16 02:16:01.501417 kubelet[2905]: E0416 02:16:01.501323 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:16:01.747968 containerd[1572]: time="2026-04-16T02:16:01.747919561Z" level=info msg="StartContainer for \"5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4\" returns successfully" Apr 16 02:16:02.326591 containerd[1572]: time="2026-04-16T02:16:02.326406126Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:16:02.356713 containerd[1572]: time="2026-04-16T02:16:02.355118250Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 16 02:16:02.392324 containerd[1572]: time="2026-04-16T02:16:02.392096263Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:16:02.457116 containerd[1572]: time="2026-04-16T02:16:02.456991185Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 21.77068246s" Apr 16 02:16:02.457116 containerd[1572]: time="2026-04-16T02:16:02.457073634Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 16 02:16:02.826535 containerd[1572]: time="2026-04-16T02:16:02.826392789Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 16 02:16:03.221971 kubelet[2905]: E0416 02:16:03.207373 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:16:03.327616 containerd[1572]: time="2026-04-16T02:16:03.326456664Z" level=info msg="CreateContainer within sandbox \"4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 16 02:16:03.378596 kubelet[2905]: E0416 02:16:03.327347 2905 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:16:03.499180 containerd[1572]: time="2026-04-16T02:16:03.499095433Z" level=info msg="Container 846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:16:03.572456 containerd[1572]: time="2026-04-16T02:16:03.572337261Z" 
level=info msg="CreateContainer within sandbox \"4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4\"" Apr 16 02:16:03.842738 kubelet[2905]: E0416 02:16:03.837000 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:16:03.888621 containerd[1572]: time="2026-04-16T02:16:03.879237785Z" level=info msg="StartContainer for \"846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4\"" Apr 16 02:16:03.888621 containerd[1572]: time="2026-04-16T02:16:03.883324088Z" level=info msg="connecting to shim 846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4" address="unix:///run/containerd/s/660c9f17e10a82e31ec8d7c1d115eeb12baa8d21aa12e4080ec56b1037d0d02a" protocol=ttrpc version=3 Apr 16 02:16:04.130575 systemd[1]: Started cri-containerd-846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4.scope - libcontainer container 846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4. Apr 16 02:16:05.733829 kubelet[2905]: E0416 02:16:05.732697 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:16:06.198767 containerd[1572]: time="2026-04-16T02:16:06.194427634Z" level=info msg="StartContainer for \"846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4\" returns successfully" Apr 16 02:16:07.628042 kubelet[2905]: E0416 02:16:07.626066 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:16:07.674965 kubelet[2905]: E0416 02:16:07.674768 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:16:08.891156 kubelet[2905]: E0416 02:16:08.890994 2905 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:16:09.026680 kubelet[2905]: E0416 02:16:09.023180 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:16:13.266007 kubelet[2905]: E0416 02:16:13.265809 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:16:14.032082 kubelet[2905]: E0416 02:16:14.031976 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:16:14.060279 kubelet[2905]: E0416 02:16:14.059976 2905 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:16:14.502743 kubelet[2905]: I0416 02:16:14.479413 2905 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-87plx" podStartSLOduration=47.479391059 
podStartE2EDuration="47.479391059s" podCreationTimestamp="2026-04-16 02:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:16:14.365398904 +0000 UTC m=+139.657287466" watchObservedRunningTime="2026-04-16 02:16:14.479391059 +0000 UTC m=+139.771279612" Apr 16 02:16:14.599421 kubelet[2905]: E0416 02:16:14.599369 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:16:15.223889 kubelet[2905]: E0416 02:16:15.223437 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:16:17.308969 kubelet[2905]: E0416 02:16:17.305363 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.192s" Apr 16 02:16:19.520258 kubelet[2905]: E0416 02:16:19.519321 2905 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:16:19.970964 kubelet[2905]: E0416 02:16:19.965417 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.869s" Apr 16 02:16:24.629691 kubelet[2905]: E0416 02:16:24.626476 2905 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:16:24.896990 kubelet[2905]: I0416 02:16:24.833466 2905 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-fqp9w" podStartSLOduration=35.852655481 podStartE2EDuration="57.833054711s" podCreationTimestamp="2026-04-16 02:15:27 +0000 UTC" firstStartedPulling="2026-04-16 02:15:40.68401373 +0000 UTC m=+105.975902281" lastFinishedPulling="2026-04-16 02:16:02.664412957 +0000 UTC m=+127.956301511" observedRunningTime="2026-04-16 02:16:21.638354522 +0000 UTC m=+146.930243091" watchObservedRunningTime="2026-04-16 02:16:24.833054711 +0000 UTC m=+150.124943289" Apr 16 02:16:28.388541 update_engine[1561]: I20260416 02:16:28.386520 1561 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 16 02:16:28.388541 update_engine[1561]: I20260416 02:16:28.386615 1561 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 16 02:16:28.388541 update_engine[1561]: I20260416 02:16:28.386953 1561 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 16 02:16:28.393452 update_engine[1561]: I20260416 02:16:28.390921 1561 omaha_request_params.cc:62] Current group set to stable Apr 16 02:16:28.393452 update_engine[1561]: I20260416 02:16:28.391072 1561 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 16 02:16:28.393452 update_engine[1561]: I20260416 02:16:28.391078 1561 update_attempter.cc:643] Scheduling an action processor start. 
Apr 16 02:16:28.393452 update_engine[1561]: I20260416 02:16:28.391100 1561 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 16 02:16:28.393452 update_engine[1561]: I20260416 02:16:28.391256 1561 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 16 02:16:28.393452 update_engine[1561]: I20260416 02:16:28.391366 1561 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 16 02:16:28.393452 update_engine[1561]: I20260416 02:16:28.391372 1561 omaha_request_action.cc:272] Request: Apr 16 02:16:28.393452 update_engine[1561]: Apr 16 02:16:28.393452 update_engine[1561]: Apr 16 02:16:28.393452 update_engine[1561]: Apr 16 02:16:28.393452 update_engine[1561]: Apr 16 02:16:28.393452 update_engine[1561]: Apr 16 02:16:28.393452 update_engine[1561]: Apr 16 02:16:28.393452 update_engine[1561]: Apr 16 02:16:28.393452 update_engine[1561]: Apr 16 02:16:28.393452 update_engine[1561]: I20260416 02:16:28.391379 1561 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 02:16:28.401929 update_engine[1561]: I20260416 02:16:28.400740 1561 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 02:16:28.402011 update_engine[1561]: I20260416 02:16:28.401952 1561 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 16 02:16:28.416896 update_engine[1561]: E20260416 02:16:28.415122 1561 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 02:16:28.416896 update_engine[1561]: I20260416 02:16:28.415742 1561 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 16 02:16:28.487104 locksmithd[1611]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 16 02:16:29.186619 kubelet[2905]: E0416 02:16:29.186318 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.049s" Apr 16 02:16:29.783647 kubelet[2905]: E0416 02:16:29.781392 2905 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:16:31.721996 kubelet[2905]: E0416 02:16:31.721277 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.631s" Apr 16 02:16:34.929945 kubelet[2905]: E0416 02:16:34.929806 2905 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:16:38.411521 update_engine[1561]: I20260416 02:16:38.406097 1561 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 02:16:38.411521 update_engine[1561]: I20260416 02:16:38.406337 1561 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 02:16:38.411521 update_engine[1561]: I20260416 02:16:38.411423 1561 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 02:16:38.434452 update_engine[1561]: E20260416 02:16:38.434369 1561 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 02:16:38.435404 update_engine[1561]: I20260416 02:16:38.435360 1561 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 16 02:16:40.130856 kubelet[2905]: E0416 02:16:40.127000 2905 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:16:45.387885 kubelet[2905]: E0416 02:16:45.377528 2905 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:16:46.345717 kubelet[2905]: E0416 02:16:46.340974 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:16:48.373090 update_engine[1561]: I20260416 02:16:48.373014 1561 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 02:16:48.378938 update_engine[1561]: I20260416 02:16:48.374428 1561 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 02:16:48.378938 update_engine[1561]: I20260416 02:16:48.375142 1561 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 16 02:16:48.395049 update_engine[1561]: E20260416 02:16:48.393954 1561 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 02:16:48.395049 update_engine[1561]: I20260416 02:16:48.394186 1561 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 16 02:16:49.510245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3354568511.mount: Deactivated successfully. Apr 16 02:16:50.438750 kubelet[2905]: E0416 02:16:50.438629 2905 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:16:55.963063 kubelet[2905]: E0416 02:16:55.962165 2905 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:16:56.355650 kubelet[2905]: E0416 02:16:56.353374 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.25s" Apr 16 02:16:57.936718 kubelet[2905]: E0416 02:16:57.935196 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.579s" Apr 16 02:16:58.374655 update_engine[1561]: I20260416 02:16:58.373620 1561 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 02:16:58.374655 update_engine[1561]: I20260416 02:16:58.373895 1561 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 02:16:58.379675 update_engine[1561]: I20260416 02:16:58.378190 1561 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 02:16:58.388853 update_engine[1561]: E20260416 02:16:58.388707 1561 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 02:16:58.388853 update_engine[1561]: I20260416 02:16:58.388866 1561 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 16 02:16:58.388853 update_engine[1561]: I20260416 02:16:58.388878 1561 omaha_request_action.cc:617] Omaha request response: Apr 16 02:16:58.389319 update_engine[1561]: E20260416 02:16:58.389117 1561 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 16 02:16:58.389319 update_engine[1561]: I20260416 02:16:58.389163 1561 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 16 02:16:58.391612 update_engine[1561]: I20260416 02:16:58.389169 1561 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 02:16:58.391612 update_engine[1561]: I20260416 02:16:58.390690 1561 update_attempter.cc:306] Processing Done. Apr 16 02:16:58.391612 update_engine[1561]: E20260416 02:16:58.390731 1561 update_attempter.cc:619] Update failed. Apr 16 02:16:58.391612 update_engine[1561]: I20260416 02:16:58.390740 1561 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 16 02:16:58.391612 update_engine[1561]: I20260416 02:16:58.390745 1561 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 16 02:16:58.391612 update_engine[1561]: I20260416 02:16:58.390772 1561 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 16 02:16:58.394782 update_engine[1561]: I20260416 02:16:58.394706 1561 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 16 02:16:58.396633 update_engine[1561]: I20260416 02:16:58.395360 1561 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 16 02:16:58.396633 update_engine[1561]: I20260416 02:16:58.395393 1561 omaha_request_action.cc:272] Request: Apr 16 02:16:58.396633 update_engine[1561]: Apr 16 02:16:58.396633 update_engine[1561]: Apr 16 02:16:58.396633 update_engine[1561]: Apr 16 02:16:58.396633 update_engine[1561]: Apr 16 02:16:58.396633 update_engine[1561]: Apr 16 02:16:58.396633 update_engine[1561]: Apr 16 02:16:58.396633 update_engine[1561]: I20260416 02:16:58.395401 1561 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 02:16:58.396633 update_engine[1561]: I20260416 02:16:58.395598 1561 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 02:16:58.396633 update_engine[1561]: I20260416 02:16:58.396060 1561 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 02:16:58.403644 locksmithd[1611]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 16 02:16:58.405638 update_engine[1561]: E20260416 02:16:58.404783 1561 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 02:16:58.405911 update_engine[1561]: I20260416 02:16:58.405736 1561 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 16 02:16:58.405911 update_engine[1561]: I20260416 02:16:58.405758 1561 omaha_request_action.cc:617] Omaha request response: Apr 16 02:16:58.405911 update_engine[1561]: I20260416 02:16:58.405768 1561 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 02:16:58.405911 update_engine[1561]: I20260416 02:16:58.405773 1561 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 02:16:58.405911 update_engine[1561]: I20260416 02:16:58.405777 1561 update_attempter.cc:306] Processing Done. Apr 16 02:16:58.405911 update_engine[1561]: I20260416 02:16:58.405784 1561 update_attempter.cc:310] Error event sent. Apr 16 02:16:58.405911 update_engine[1561]: I20260416 02:16:58.405817 1561 update_check_scheduler.cc:74] Next update check in 44m32s Apr 16 02:16:58.407733 locksmithd[1611]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 16 02:17:01.098410 kubelet[2905]: E0416 02:17:01.042515 2905 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:17:05.293252 kubelet[2905]: E0416 02:17:05.289531 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.186s" Apr 16 02:17:06.268896 kubelet[2905]: E0416 02:17:06.268344 2905 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:17:11.432760 kubelet[2905]: E0416 02:17:11.423456 2905 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:17:14.257598 systemd[1]: cri-containerd-b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d.scope: Deactivated successfully. Apr 16 02:17:14.261529 systemd[1]: cri-containerd-b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d.scope: Consumed 13.816s CPU time, 37.2M memory peak, 2.3M read from disk. 
Apr 16 02:17:14.362894 containerd[1572]: time="2026-04-16T02:17:14.350445719Z" level=info msg="received container exit event container_id:\"b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d\" id:\"b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d\" pid:3224 exit_status:1 exited_at:{seconds:1776305834 nanos:328342851}" Apr 16 02:17:14.845005 containerd[1572]: time="2026-04-16T02:17:14.844919057Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:17:14.862737 containerd[1572]: time="2026-04-16T02:17:14.862170520Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 16 02:17:14.874906 containerd[1572]: time="2026-04-16T02:17:14.874699733Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:17:14.891282 containerd[1572]: time="2026-04-16T02:17:14.891036200Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 1m12.064529375s" Apr 16 02:17:14.891282 containerd[1572]: time="2026-04-16T02:17:14.891116168Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 16 02:17:15.201441 containerd[1572]: time="2026-04-16T02:17:15.200011487Z" level=info msg="CreateContainer within sandbox \"51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 16 02:17:15.218722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d-rootfs.mount: Deactivated successfully. 
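The PullImage record above contains enough to estimate effective pull throughput: the cilium image resolved to 166,719,855 bytes and the pull took 1m12.064529375s. A quick back-of-the-envelope check, using only the values copied from that entry:

```python
size_bytes = 166_719_855      # size "166719855" from the PullImage entry
duration_s = 72.064529375     # "in 1m12.064529375s"

mib_per_s = size_bytes / duration_s / (1024 * 1024)
print(f"effective pull rate ≈ {mib_per_s:.2f} MiB/s")   # ≈ 2.21 MiB/s
```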
Apr 16 02:17:15.416487 containerd[1572]: time="2026-04-16T02:17:15.416423829Z" level=info msg="Container 8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:17:15.520049 containerd[1572]: time="2026-04-16T02:17:15.519668952Z" level=info msg="CreateContainer within sandbox \"51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17\"" Apr 16 02:17:15.569153 containerd[1572]: time="2026-04-16T02:17:15.569078482Z" level=info msg="StartContainer for \"8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17\"" Apr 16 02:17:15.584437 containerd[1572]: time="2026-04-16T02:17:15.584371175Z" level=info msg="connecting to shim 8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17" address="unix:///run/containerd/s/d949ffc9f32e946458adcc605e0373b54fdcb41e5a85eb2f45e2ff79287adf00" protocol=ttrpc version=3 Apr 16 02:17:16.124508 systemd[1]: Started cri-containerd-8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17.scope - libcontainer container 8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17. Apr 16 02:17:16.601905 kubelet[2905]: E0416 02:17:16.601699 2905 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:17:16.662588 kubelet[2905]: I0416 02:17:16.662165 2905 scope.go:117] "RemoveContainer" containerID="b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d" Apr 16 02:17:16.669843 kubelet[2905]: E0416 02:17:16.667999 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:17:16.695222 kubelet[2905]: E0416 02:17:16.687506 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:17:17.500115 containerd[1572]: time="2026-04-16T02:17:17.498513992Z" level=info msg="StartContainer for \"8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17\" returns successfully" Apr 16 02:17:17.695169 containerd[1572]: time="2026-04-16T02:17:17.695115779Z" level=info msg="received container exit event container_id:\"8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17\" id:\"8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17\" pid:3514 exited_at:{seconds:1776305837 nanos:692518847}" Apr 16 02:17:17.708864 systemd[1]: cri-containerd-8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17.scope: Deactivated successfully. Apr 16 02:17:17.713359 systemd[1]: cri-containerd-8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17.scope: Consumed 405ms CPU time, 6.8M memory peak, 4K read from disk, 2.1M written to disk. 
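containerd reports exit times as a protobuf-style timestamp (exited_at:{seconds:... nanos:...}). Converting the one above back to wall-clock time lines up with the surrounding journal timestamps; a small sketch of the conversion:

```python
from datetime import datetime, timezone

# From the exit event above: exited_at:{seconds:1776305837 nanos:692518847}
seconds, nanos = 1776305837, 692518847

exited_at = datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)
print(exited_at.isoformat())   # 2026-04-16T02:17:17.692519+00:00
```

That matches the journal entry logged a few milliseconds later at 02:17:17.695169.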
Apr 16 02:17:18.213638 kubelet[2905]: E0416 02:17:18.213228 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:17:18.574179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17-rootfs.mount: Deactivated successfully. Apr 16 02:17:19.470660 kubelet[2905]: E0416 02:17:19.467533 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:17:19.660431 containerd[1572]: time="2026-04-16T02:17:19.660175329Z" level=info msg="CreateContainer within sandbox \"51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 16 02:17:19.798229 containerd[1572]: time="2026-04-16T02:17:19.795309744Z" level=info msg="Container 1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:17:19.824322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2693205694.mount: Deactivated successfully. Apr 16 02:17:19.840142 containerd[1572]: time="2026-04-16T02:17:19.839884270Z" level=info msg="CreateContainer within sandbox \"51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563\"" Apr 16 02:17:19.876209 containerd[1572]: time="2026-04-16T02:17:19.867509707Z" level=info msg="StartContainer for \"1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563\"" Apr 16 02:17:19.885066 containerd[1572]: time="2026-04-16T02:17:19.876441428Z" level=info msg="connecting to shim 1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563" address="unix:///run/containerd/s/d949ffc9f32e946458adcc605e0373b54fdcb41e5a85eb2f45e2ff79287adf00" protocol=ttrpc version=3 Apr 16 02:17:20.129690 systemd[1]: Started cri-containerd-1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563.scope - libcontainer container 1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563. Apr 16 02:17:21.176868 containerd[1572]: time="2026-04-16T02:17:21.175486385Z" level=info msg="StartContainer for \"1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563\" returns successfully" Apr 16 02:17:21.663460 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 16 02:17:21.666393 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 16 02:17:21.667741 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 16 02:17:21.697862 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 02:17:21.742927 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 16 02:17:21.770697 systemd[1]: cri-containerd-1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563.scope: Deactivated successfully. 
Apr 16 02:17:21.820359 containerd[1572]: time="2026-04-16T02:17:21.819948892Z" level=info msg="received container exit event container_id:\"1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563\" id:\"1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563\" pid:3560 exited_at:{seconds:1776305841 nanos:807012939}" Apr 16 02:17:21.955245 kubelet[2905]: E0416 02:17:21.678523 2905 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:17:22.131122 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 02:17:22.808697 kubelet[2905]: E0416 02:17:22.806783 2905 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fa6eadd_c61c_46c9_a233_f61300b39bd5.slice/cri-containerd-1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563.scope\": RecentStats: unable to find data in memory cache]" Apr 16 02:17:22.943470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563-rootfs.mount: Deactivated successfully. Apr 16 02:17:23.145407 kubelet[2905]: E0416 02:17:23.144439 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:17:23.236343 kubelet[2905]: E0416 02:17:23.236241 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.132s" Apr 16 02:17:23.242694 containerd[1572]: time="2026-04-16T02:17:23.238592781Z" level=info msg="CreateContainer within sandbox \"51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 16 02:17:23.406974 containerd[1572]: time="2026-04-16T02:17:23.404482032Z" level=info msg="Container b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:17:23.529247 containerd[1572]: time="2026-04-16T02:17:23.529080541Z" level=info msg="CreateContainer within sandbox \"51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a\"" Apr 16 02:17:23.614612 containerd[1572]: time="2026-04-16T02:17:23.613253342Z" level=info msg="StartContainer for \"b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a\"" Apr 16 02:17:23.630938 containerd[1572]: time="2026-04-16T02:17:23.630849414Z" level=info msg="connecting to shim b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a" address="unix:///run/containerd/s/d949ffc9f32e946458adcc605e0373b54fdcb41e5a85eb2f45e2ff79287adf00" protocol=ttrpc version=3 Apr 16 02:17:24.140252 systemd[1]: Started cri-containerd-b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a.scope - libcontainer container b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a. 
Apr 16 02:17:24.429923 kubelet[2905]: I0416 02:17:24.427290 2905 scope.go:117] "RemoveContainer" containerID="b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d" Apr 16 02:17:24.429923 kubelet[2905]: E0416 02:17:24.429152 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:17:24.554832 containerd[1572]: time="2026-04-16T02:17:24.554714461Z" level=info msg="CreateContainer within sandbox \"8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:3,}" Apr 16 02:17:24.696573 containerd[1572]: time="2026-04-16T02:17:24.696207033Z" level=info msg="Container bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:17:24.861434 containerd[1572]: time="2026-04-16T02:17:24.861221984Z" level=info msg="CreateContainer within sandbox \"8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:3,} returns container id \"bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90\"" Apr 16 02:17:24.937872 containerd[1572]: time="2026-04-16T02:17:24.937685755Z" level=info msg="StartContainer for \"bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90\"" Apr 16 02:17:25.006826 containerd[1572]: time="2026-04-16T02:17:25.005930216Z" level=info msg="connecting to shim bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90" address="unix:///run/containerd/s/19fb7b3958679c24ac66e8dd57527f0cf6dd433ec0ccb7dc7514e788b8b7a005" protocol=ttrpc version=3 Apr 16 02:17:25.415748 systemd[1]: Started cri-containerd-bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90.scope - libcontainer container bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90. Apr 16 02:17:26.589284 containerd[1572]: time="2026-04-16T02:17:26.589132785Z" level=info msg="StartContainer for \"b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a\" returns successfully" Apr 16 02:17:26.672967 systemd[1]: cri-containerd-b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a.scope: Deactivated successfully. Apr 16 02:17:26.813907 containerd[1572]: time="2026-04-16T02:17:26.813779115Z" level=info msg="received container exit event container_id:\"b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a\" id:\"b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a\" pid:3609 exited_at:{seconds:1776305846 nanos:809299010}" Apr 16 02:17:26.973149 containerd[1572]: time="2026-04-16T02:17:26.967400493Z" level=info msg="StartContainer for \"bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90\" returns successfully" Apr 16 02:17:27.043060 kubelet[2905]: E0416 02:17:27.042383 2905 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:17:28.027377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a-rootfs.mount: Deactivated successfully. 
Apr 16 02:17:28.093744 kubelet[2905]: E0416 02:17:28.091012 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:17:28.128515 kubelet[2905]: E0416 02:17:28.128307 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:17:28.416579 kubelet[2905]: E0416 02:17:28.407218 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:17:29.130030 kubelet[2905]: E0416 02:17:29.127235 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:17:29.212128 kubelet[2905]: E0416 02:17:29.209996 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:17:29.838121 kubelet[2905]: E0416 02:17:29.837767 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:17:30.014845 containerd[1572]: time="2026-04-16T02:17:30.013917467Z" level=info msg="CreateContainer within sandbox \"51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 16 02:17:30.125622 containerd[1572]: time="2026-04-16T02:17:30.124965331Z" level=info msg="Container 768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:17:30.297204 containerd[1572]: time="2026-04-16T02:17:30.297067007Z" level=info msg="CreateContainer within sandbox \"51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7\"" Apr 16 02:17:30.341781 containerd[1572]: time="2026-04-16T02:17:30.339199734Z" level=info msg="StartContainer for \"768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7\"" Apr 16 02:17:30.364719 containerd[1572]: time="2026-04-16T02:17:30.363637950Z" level=info msg="connecting to shim 768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7" address="unix:///run/containerd/s/d949ffc9f32e946458adcc605e0373b54fdcb41e5a85eb2f45e2ff79287adf00" protocol=ttrpc version=3 Apr 16 02:17:30.681501 systemd[1]: Started cri-containerd-768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7.scope - libcontainer container 768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7. Apr 16 02:17:31.497505 systemd[1]: cri-containerd-768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7.scope: Deactivated successfully. 
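The recurring dns.go "Nameserver limits exceeded" errors come from kubelet capping a pod's resolv.conf at three nameservers (the classic glibc resolver limit); the node evidently lists more than three, and only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are applied. A minimal sketch of that trimming rule — not kubelet's actual code, and the fourth nameserver below is hypothetical:

```python
MAX_DNS_NAMESERVERS = 3   # resolv.conf limit that kubelet enforces per pod

def trim_nameservers(nameservers: list[str]) -> list[str]:
    """Keep the first MAX_DNS_NAMESERVERS entries and warn about the rest."""
    if len(nameservers) > MAX_DNS_NAMESERVERS:
        dropped = nameservers[MAX_DNS_NAMESERVERS:]
        print(f"Nameserver limits exceeded, omitting: {', '.join(dropped)}")
    return nameservers[:MAX_DNS_NAMESERVERS]

print(trim_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"]))
# -> ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```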
Apr 16 02:17:31.533037 containerd[1572]: time="2026-04-16T02:17:31.520179853Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fa6eadd_c61c_46c9_a233_f61300b39bd5.slice/cri-containerd-768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7.scope/memory.events\": no such file or directory" Apr 16 02:17:31.599702 containerd[1572]: time="2026-04-16T02:17:31.599275611Z" level=info msg="received container exit event container_id:\"768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7\" id:\"768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7\" pid:3686 exited_at:{seconds:1776305851 nanos:520930650}" Apr 16 02:17:31.634407 containerd[1572]: time="2026-04-16T02:17:31.633994048Z" level=info msg="StartContainer for \"768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7\" returns successfully" Apr 16 02:17:32.173119 kubelet[2905]: E0416 02:17:32.167000 2905 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:17:32.503462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7-rootfs.mount: Deactivated successfully. Apr 16 02:17:33.288891 kubelet[2905]: E0416 02:17:33.286117 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:17:33.341884 containerd[1572]: time="2026-04-16T02:17:33.338589067Z" level=info msg="CreateContainer within sandbox \"51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 16 02:17:33.471246 containerd[1572]: time="2026-04-16T02:17:33.471157185Z" level=info msg="Container 3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:17:33.507915 containerd[1572]: time="2026-04-16T02:17:33.505323292Z" level=info msg="CreateContainer within sandbox \"51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2\"" Apr 16 02:17:33.600184 containerd[1572]: time="2026-04-16T02:17:33.596018100Z" level=info msg="StartContainer for \"3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2\"" Apr 16 02:17:33.630272 containerd[1572]: time="2026-04-16T02:17:33.627282308Z" level=info msg="connecting to shim 3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2" address="unix:///run/containerd/s/d949ffc9f32e946458adcc605e0373b54fdcb41e5a85eb2f45e2ff79287adf00" protocol=ttrpc version=3 Apr 16 02:17:33.854626 kubelet[2905]: E0416 02:17:33.854229 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:17:34.135506 systemd[1]: Started cri-containerd-3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2.scope - libcontainer container 3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2. 
Apr 16 02:17:34.840576 containerd[1572]: time="2026-04-16T02:17:34.837534299Z" level=info msg="StartContainer for \"3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2\" returns successfully" Apr 16 02:17:36.844343 kubelet[2905]: E0416 02:17:36.844042 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:17:38.124316 kubelet[2905]: E0416 02:17:38.123926 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:17:43.839527 kubelet[2905]: E0416 02:17:43.839326 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.725s" Apr 16 02:17:44.078900 kubelet[2905]: E0416 02:17:44.074267 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:17:44.088094 kubelet[2905]: E0416 02:17:44.085470 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:17:45.450449 kubelet[2905]: I0416 02:17:45.449196 2905 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qdfxn" podStartSLOduration=50.777422375 podStartE2EDuration="2m18.449170152s" podCreationTimestamp="2026-04-16 02:15:27 +0000 UTC" firstStartedPulling="2026-04-16 02:15:47.371828762 +0000 UTC m=+112.663717312" lastFinishedPulling="2026-04-16 02:17:15.043576527 +0000 UTC m=+200.335465089" observedRunningTime="2026-04-16 02:17:45.4491262 +0000 UTC m=+230.741014763" watchObservedRunningTime="2026-04-16 02:17:45.449170152 +0000 UTC m=+230.741058706" Apr 16 02:17:52.098771 systemd-networkd[1492]: cilium_host: Link UP Apr 16 02:17:52.106395 systemd-networkd[1492]: cilium_net: Link UP Apr 16 02:17:52.106893 systemd-networkd[1492]: cilium_net: Gained carrier Apr 16 02:17:52.110992 systemd-networkd[1492]: cilium_host: Gained carrier Apr 16 02:17:52.428027 systemd-networkd[1492]: cilium_host: Gained IPv6LL Apr 16 02:17:52.754325 systemd-networkd[1492]: cilium_net: Gained IPv6LL Apr 16 02:17:53.041431 systemd-networkd[1492]: cilium_vxlan: Link UP Apr 16 02:17:53.041440 systemd-networkd[1492]: cilium_vxlan: Gained carrier Apr 16 02:17:53.991407 kernel: NET: Registered PF_ALG protocol family Apr 16 02:17:54.293929 systemd-networkd[1492]: cilium_vxlan: Gained IPv6LL Apr 16 02:17:55.829394 systemd[1]: Created slice kubepods-burstable-pod0c2461c3_3ae2_4eb9_b9b7_1329659c8f8b.slice - libcontainer container kubepods-burstable-pod0c2461c3_3ae2_4eb9_b9b7_1329659c8f8b.slice. 
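The pod_startup_latency_tracker entry above distinguishes end-to-end startup time from the image-pull window: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the time spent pulling images — the logged numbers line up exactly. The arithmetic, spelled out with values copied from the entry:

```python
# Monotonic "m=+...s" offsets copied from the pod_startup_latency_tracker entry.
first_started_pulling = 112.663717312   # firstStartedPulling  (02:15:47.371)
last_finished_pulling = 200.335465089   # lastFinishedPulling  (02:17:15.043)
e2e_duration          = 138.449170152   # podStartE2EDuration  "2m18.449170152s"

pull_window  = last_finished_pulling - first_started_pulling
slo_duration = e2e_duration - pull_window

print(f"image pull window   ≈ {pull_window:.9f} s")    # 87.671747777
print(f"podStartSLOduration ≈ {slo_duration:.9f} s")   # 50.777422375, matching the log
```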
Apr 16 02:17:55.838066 kubelet[2905]: I0416 02:17:55.836877 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdfd6\" (UniqueName: \"kubernetes.io/projected/0c2461c3-3ae2-4eb9-b9b7-1329659c8f8b-kube-api-access-xdfd6\") pod \"coredns-66bc5c9577-8cn7r\" (UID: \"0c2461c3-3ae2-4eb9-b9b7-1329659c8f8b\") " pod="kube-system/coredns-66bc5c9577-8cn7r" Apr 16 02:17:55.839831 kubelet[2905]: I0416 02:17:55.839760 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c2461c3-3ae2-4eb9-b9b7-1329659c8f8b-config-volume\") pod \"coredns-66bc5c9577-8cn7r\" (UID: \"0c2461c3-3ae2-4eb9-b9b7-1329659c8f8b\") " pod="kube-system/coredns-66bc5c9577-8cn7r" Apr 16 02:17:56.053183 kubelet[2905]: I0416 02:17:56.052047 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-845vl\" (UniqueName: \"kubernetes.io/projected/224b1be2-a057-4a43-9d23-a0957387a459-kube-api-access-845vl\") pod \"coredns-66bc5c9577-wmdr2\" (UID: \"224b1be2-a057-4a43-9d23-a0957387a459\") " pod="kube-system/coredns-66bc5c9577-wmdr2" Apr 16 02:17:56.053183 kubelet[2905]: I0416 02:17:56.052435 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/224b1be2-a057-4a43-9d23-a0957387a459-config-volume\") pod \"coredns-66bc5c9577-wmdr2\" (UID: \"224b1be2-a057-4a43-9d23-a0957387a459\") " pod="kube-system/coredns-66bc5c9577-wmdr2" Apr 16 02:17:56.186639 systemd[1]: Created slice kubepods-burstable-pod224b1be2_a057_4a43_9d23_a0957387a459.slice - libcontainer container kubepods-burstable-pod224b1be2_a057_4a43_9d23_a0957387a459.slice. Apr 16 02:17:56.834016 kubelet[2905]: E0416 02:17:56.832211 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:17:56.845685 containerd[1572]: time="2026-04-16T02:17:56.842519011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wmdr2,Uid:224b1be2-a057-4a43-9d23-a0957387a459,Namespace:kube-system,Attempt:0,}" Apr 16 02:17:56.850881 kubelet[2905]: E0416 02:17:56.846964 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:17:56.873014 containerd[1572]: time="2026-04-16T02:17:56.855079406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8cn7r,Uid:0c2461c3-3ae2-4eb9-b9b7-1329659c8f8b,Namespace:kube-system,Attempt:0,}" Apr 16 02:17:57.673538 systemd[1]: Started sshd@9-10.0.0.34:22-10.0.0.1:42062.service - OpenSSH per-connection server daemon (10.0.0.1:42062). Apr 16 02:17:57.969044 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 42062 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:17:57.989243 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:17:58.054943 systemd-logind[1559]: New session 10 of user core. Apr 16 02:17:58.089691 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 16 02:17:59.598938 sshd[4078]: Connection closed by 10.0.0.1 port 42062 Apr 16 02:17:59.605148 sshd-session[4075]: pam_unix(sshd:session): session closed for user core Apr 16 02:17:59.639433 systemd[1]: sshd@9-10.0.0.34:22-10.0.0.1:42062.service: Deactivated successfully. Apr 16 02:17:59.682673 systemd[1]: session-10.scope: Deactivated successfully. Apr 16 02:17:59.705788 systemd-logind[1559]: Session 10 logged out. Waiting for processes to exit. Apr 16 02:17:59.716394 systemd-logind[1559]: Removed session 10. Apr 16 02:18:01.989874 systemd-networkd[1492]: lxc_health: Link UP Apr 16 02:18:02.037125 systemd-networkd[1492]: lxc_health: Gained carrier Apr 16 02:18:02.167425 kubelet[2905]: E0416 02:18:02.165873 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:18:02.509350 kubelet[2905]: E0416 02:18:02.508795 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:18:03.892405 systemd-networkd[1492]: lxc_health: Gained IPv6LL Apr 16 02:18:04.784315 systemd[1]: Started sshd@10-10.0.0.34:22-10.0.0.1:42072.service - OpenSSH per-connection server daemon (10.0.0.1:42072). Apr 16 02:18:05.284396 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 42072 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:18:05.296362 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:18:05.473357 systemd-logind[1559]: New session 11 of user core. Apr 16 02:18:05.548444 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 16 02:18:06.290853 systemd-networkd[1492]: lxced993fe33330: Link UP Apr 16 02:18:06.298208 kernel: eth0: renamed from tmpa0784 Apr 16 02:18:06.312801 systemd-networkd[1492]: lxced993fe33330: Gained carrier Apr 16 02:18:06.956689 systemd-networkd[1492]: lxc748457d44159: Link UP Apr 16 02:18:07.004642 kernel: eth0: renamed from tmp85c56 Apr 16 02:18:07.018274 systemd-networkd[1492]: lxc748457d44159: Gained carrier Apr 16 02:18:07.519722 sshd[4248]: Connection closed by 10.0.0.1 port 42072 Apr 16 02:18:07.513964 sshd-session[4245]: pam_unix(sshd:session): session closed for user core Apr 16 02:18:07.561272 systemd[1]: sshd@10-10.0.0.34:22-10.0.0.1:42072.service: Deactivated successfully. Apr 16 02:18:07.579977 systemd[1]: session-11.scope: Deactivated successfully. Apr 16 02:18:07.594850 systemd-logind[1559]: Session 11 logged out. Waiting for processes to exit. Apr 16 02:18:07.617829 systemd-logind[1559]: Removed session 11. 
Apr 16 02:18:08.240643 systemd-networkd[1492]: lxced993fe33330: Gained IPv6LL Apr 16 02:18:08.916597 systemd-networkd[1492]: lxc748457d44159: Gained IPv6LL Apr 16 02:18:09.113331 kubelet[2905]: E0416 02:18:09.111988 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:18:11.583990 containerd[1572]: time="2026-04-16T02:18:11.459102799Z" level=warning msg="container event discarded" container=42f8ee2c39c1ece526ac37473a8a0bd280997855c294400c28ad5b3842ee2228 type=CONTAINER_STOPPED_EVENT Apr 16 02:18:12.502440 containerd[1572]: time="2026-04-16T02:18:12.501965557Z" level=warning msg="container event discarded" container=6b4ea757f23e306ae6bfd9f988c8d5844ab4e5030a8da90a83e19d0058779129 type=CONTAINER_CREATED_EVENT Apr 16 02:18:12.979752 systemd[1]: Started sshd@11-10.0.0.34:22-10.0.0.1:52040.service - OpenSSH per-connection server daemon (10.0.0.1:52040). Apr 16 02:18:13.535775 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 52040 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:18:13.539868 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:18:13.596349 systemd-logind[1559]: New session 12 of user core. Apr 16 02:18:13.662881 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 16 02:18:14.140460 containerd[1572]: time="2026-04-16T02:18:14.132725742Z" level=warning msg="container event discarded" container=6b4ea757f23e306ae6bfd9f988c8d5844ab4e5030a8da90a83e19d0058779129 type=CONTAINER_STARTED_EVENT Apr 16 02:18:14.938822 sshd[4293]: Connection closed by 10.0.0.1 port 52040 Apr 16 02:18:14.946981 sshd-session[4290]: pam_unix(sshd:session): session closed for user core Apr 16 02:18:15.069433 systemd[1]: sshd@11-10.0.0.34:22-10.0.0.1:52040.service: Deactivated successfully. Apr 16 02:18:15.236175 systemd[1]: session-12.scope: Deactivated successfully. Apr 16 02:18:15.286900 systemd-logind[1559]: Session 12 logged out. Waiting for processes to exit. Apr 16 02:18:15.302195 systemd-logind[1559]: Removed session 12. Apr 16 02:18:20.199770 systemd[1]: Started sshd@12-10.0.0.34:22-10.0.0.1:48210.service - OpenSSH per-connection server daemon (10.0.0.1:48210). Apr 16 02:18:21.118480 sshd[4311]: Accepted publickey for core from 10.0.0.1 port 48210 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:18:21.126950 sshd-session[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:18:21.210388 systemd-logind[1559]: New session 13 of user core. Apr 16 02:18:21.230772 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 16 02:18:23.322923 sshd[4314]: Connection closed by 10.0.0.1 port 48210 Apr 16 02:18:23.341117 sshd-session[4311]: pam_unix(sshd:session): session closed for user core Apr 16 02:18:23.434125 systemd[1]: sshd@12-10.0.0.34:22-10.0.0.1:48210.service: Deactivated successfully. Apr 16 02:18:23.491104 systemd[1]: session-13.scope: Deactivated successfully. Apr 16 02:18:23.513404 systemd-logind[1559]: Session 13 logged out. Waiting for processes to exit. Apr 16 02:18:23.545109 systemd-logind[1559]: Removed session 13. Apr 16 02:18:28.535910 systemd[1]: Started sshd@13-10.0.0.34:22-10.0.0.1:58018.service - OpenSSH per-connection server daemon (10.0.0.1:58018). 
Apr 16 02:18:29.100912 sshd[4331]: Accepted publickey for core from 10.0.0.1 port 58018 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:18:29.112415 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:18:29.334390 systemd-logind[1559]: New session 14 of user core. Apr 16 02:18:29.417539 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 16 02:18:30.772708 sshd[4336]: Connection closed by 10.0.0.1 port 58018 Apr 16 02:18:30.780256 sshd-session[4331]: pam_unix(sshd:session): session closed for user core Apr 16 02:18:30.826724 systemd[1]: sshd@13-10.0.0.34:22-10.0.0.1:58018.service: Deactivated successfully. Apr 16 02:18:30.953483 systemd[1]: session-14.scope: Deactivated successfully. Apr 16 02:18:30.967524 systemd-logind[1559]: Session 14 logged out. Waiting for processes to exit. Apr 16 02:18:30.984512 systemd-logind[1559]: Removed session 14. Apr 16 02:18:33.126968 kubelet[2905]: E0416 02:18:33.124095 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:18:35.824678 systemd[1]: Started sshd@14-10.0.0.34:22-10.0.0.1:56048.service - OpenSSH per-connection server daemon (10.0.0.1:56048). Apr 16 02:18:36.252921 sshd[4350]: Accepted publickey for core from 10.0.0.1 port 56048 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:18:36.264021 sshd-session[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:18:36.425210 systemd-logind[1559]: New session 15 of user core. Apr 16 02:18:36.592495 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 16 02:18:37.658489 sshd[4353]: Connection closed by 10.0.0.1 port 56048 Apr 16 02:18:37.662614 sshd-session[4350]: pam_unix(sshd:session): session closed for user core Apr 16 02:18:37.680004 systemd[1]: sshd@14-10.0.0.34:22-10.0.0.1:56048.service: Deactivated successfully. Apr 16 02:18:37.695396 systemd[1]: session-15.scope: Deactivated successfully. Apr 16 02:18:37.706473 systemd-logind[1559]: Session 15 logged out. Waiting for processes to exit. Apr 16 02:18:37.722125 systemd-logind[1559]: Removed session 15. Apr 16 02:18:42.913213 systemd[1]: Started sshd@15-10.0.0.34:22-10.0.0.1:56062.service - OpenSSH per-connection server daemon (10.0.0.1:56062). Apr 16 02:18:43.787385 sshd[4369]: Accepted publickey for core from 10.0.0.1 port 56062 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:18:43.790640 sshd-session[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:18:43.965989 systemd-logind[1559]: New session 16 of user core. Apr 16 02:18:44.005717 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 16 02:18:45.240394 sshd[4372]: Connection closed by 10.0.0.1 port 56062 Apr 16 02:18:45.249137 sshd-session[4369]: pam_unix(sshd:session): session closed for user core Apr 16 02:18:45.292111 systemd[1]: sshd@15-10.0.0.34:22-10.0.0.1:56062.service: Deactivated successfully. Apr 16 02:18:45.325484 systemd[1]: session-16.scope: Deactivated successfully. Apr 16 02:18:45.343781 systemd-logind[1559]: Session 16 logged out. Waiting for processes to exit. Apr 16 02:18:45.375006 systemd-logind[1559]: Removed session 16. 
Apr 16 02:18:46.151896 kubelet[2905]: E0416 02:18:46.150518 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:18:47.091103 kubelet[2905]: E0416 02:18:47.089841 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:18:50.420025 systemd[1]: Started sshd@16-10.0.0.34:22-10.0.0.1:37102.service - OpenSSH per-connection server daemon (10.0.0.1:37102). Apr 16 02:18:50.473178 kubelet[2905]: E0416 02:18:50.471367 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:18:51.016698 sshd[4387]: Accepted publickey for core from 10.0.0.1 port 37102 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:18:51.047213 sshd-session[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:18:51.139134 systemd-logind[1559]: New session 17 of user core. Apr 16 02:18:51.182130 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 16 02:18:53.094719 sshd[4390]: Connection closed by 10.0.0.1 port 37102 Apr 16 02:18:53.098634 sshd-session[4387]: pam_unix(sshd:session): session closed for user core Apr 16 02:18:53.141341 systemd[1]: sshd@16-10.0.0.34:22-10.0.0.1:37102.service: Deactivated successfully. Apr 16 02:18:53.222249 systemd[1]: session-17.scope: Deactivated successfully. Apr 16 02:18:53.245063 systemd-logind[1559]: Session 17 logged out. Waiting for processes to exit. Apr 16 02:18:53.252510 systemd-logind[1559]: Removed session 17. Apr 16 02:18:58.192043 systemd[1]: Started sshd@17-10.0.0.34:22-10.0.0.1:37410.service - OpenSSH per-connection server daemon (10.0.0.1:37410). Apr 16 02:18:58.724938 sshd[4404]: Accepted publickey for core from 10.0.0.1 port 37410 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:18:58.742466 sshd-session[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:18:58.808502 systemd-logind[1559]: New session 18 of user core. Apr 16 02:18:58.921085 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 16 02:19:00.207017 sshd[4409]: Connection closed by 10.0.0.1 port 37410 Apr 16 02:19:00.215428 sshd-session[4404]: pam_unix(sshd:session): session closed for user core Apr 16 02:19:00.229445 systemd[1]: sshd@17-10.0.0.34:22-10.0.0.1:37410.service: Deactivated successfully. Apr 16 02:19:00.255166 systemd[1]: session-18.scope: Deactivated successfully. Apr 16 02:19:00.262104 systemd-logind[1559]: Session 18 logged out. Waiting for processes to exit. Apr 16 02:19:00.268058 systemd-logind[1559]: Removed session 18. Apr 16 02:19:05.232176 systemd[1]: Started sshd@18-10.0.0.34:22-10.0.0.1:37424.service - OpenSSH per-connection server daemon (10.0.0.1:37424). Apr 16 02:19:05.821733 sshd[4424]: Accepted publickey for core from 10.0.0.1 port 37424 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:19:05.829471 sshd-session[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:19:05.964959 systemd-logind[1559]: New session 19 of user core. Apr 16 02:19:05.998414 systemd[1]: Started session-19.scope - Session 19 of User core. 
Apr 16 02:19:07.150277 sshd[4427]: Connection closed by 10.0.0.1 port 37424 Apr 16 02:19:07.154832 sshd-session[4424]: pam_unix(sshd:session): session closed for user core Apr 16 02:19:07.181270 systemd[1]: sshd@18-10.0.0.34:22-10.0.0.1:37424.service: Deactivated successfully. Apr 16 02:19:07.192478 systemd[1]: session-19.scope: Deactivated successfully. Apr 16 02:19:07.225422 systemd-logind[1559]: Session 19 logged out. Waiting for processes to exit. Apr 16 02:19:07.316041 systemd-logind[1559]: Removed session 19. Apr 16 02:19:12.212106 systemd[1]: Started sshd@19-10.0.0.34:22-10.0.0.1:39804.service - OpenSSH per-connection server daemon (10.0.0.1:39804). Apr 16 02:19:12.593491 sshd[4447]: Accepted publickey for core from 10.0.0.1 port 39804 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:19:12.598153 sshd-session[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:19:12.674392 systemd-logind[1559]: New session 20 of user core. Apr 16 02:19:12.710416 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 16 02:19:14.807700 sshd[4450]: Connection closed by 10.0.0.1 port 39804 Apr 16 02:19:14.807346 sshd-session[4447]: pam_unix(sshd:session): session closed for user core Apr 16 02:19:14.848740 systemd-logind[1559]: Session 20 logged out. Waiting for processes to exit. Apr 16 02:19:14.859228 systemd[1]: sshd@19-10.0.0.34:22-10.0.0.1:39804.service: Deactivated successfully. Apr 16 02:19:14.990774 systemd[1]: session-20.scope: Deactivated successfully. Apr 16 02:19:15.037789 systemd-logind[1559]: Removed session 20. Apr 16 02:19:20.106273 systemd[1]: Started sshd@20-10.0.0.34:22-10.0.0.1:40090.service - OpenSSH per-connection server daemon (10.0.0.1:40090). Apr 16 02:19:20.687361 sshd[4464]: Accepted publickey for core from 10.0.0.1 port 40090 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:19:20.790849 sshd-session[4464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:19:20.921488 systemd-logind[1559]: New session 21 of user core. Apr 16 02:19:21.016829 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 16 02:19:23.956431 sshd[4467]: Connection closed by 10.0.0.1 port 40090 Apr 16 02:19:23.961172 sshd-session[4464]: pam_unix(sshd:session): session closed for user core Apr 16 02:19:23.990326 systemd[1]: sshd@20-10.0.0.34:22-10.0.0.1:40090.service: Deactivated successfully. Apr 16 02:19:24.024390 systemd[1]: session-21.scope: Deactivated successfully. Apr 16 02:19:24.029760 systemd[1]: session-21.scope: Consumed 1.814s CPU time, 15.3M memory peak. Apr 16 02:19:24.124769 systemd-logind[1559]: Session 21 logged out. Waiting for processes to exit. Apr 16 02:19:24.142713 systemd-logind[1559]: Removed session 21. Apr 16 02:19:25.115698 kubelet[2905]: E0416 02:19:25.115100 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:19:28.162637 kubelet[2905]: E0416 02:19:28.160135 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:19:29.175604 systemd[1]: Started sshd@21-10.0.0.34:22-10.0.0.1:58560.service - OpenSSH per-connection server daemon (10.0.0.1:58560). 
Apr 16 02:19:30.010825 sshd[4482]: Accepted publickey for core from 10.0.0.1 port 58560 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:19:30.038532 sshd-session[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:19:30.292285 systemd-logind[1559]: New session 22 of user core. Apr 16 02:19:30.362158 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 16 02:19:33.415007 sshd[4485]: Connection closed by 10.0.0.1 port 58560 Apr 16 02:19:33.417475 sshd-session[4482]: pam_unix(sshd:session): session closed for user core Apr 16 02:19:33.498225 systemd[1]: sshd@21-10.0.0.34:22-10.0.0.1:58560.service: Deactivated successfully. Apr 16 02:19:33.617536 systemd[1]: session-22.scope: Deactivated successfully. Apr 16 02:19:33.625003 systemd[1]: session-22.scope: Consumed 1.526s CPU time, 17.1M memory peak. Apr 16 02:19:33.684095 systemd-logind[1559]: Session 22 logged out. Waiting for processes to exit. Apr 16 02:19:33.705445 systemd-logind[1559]: Removed session 22. Apr 16 02:19:36.099218 kubelet[2905]: E0416 02:19:36.098352 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:19:36.426478 systemd-networkd[1492]: lxced993fe33330: Link DOWN Apr 16 02:19:36.426487 systemd-networkd[1492]: lxced993fe33330: Lost carrier Apr 16 02:19:37.199097 systemd-networkd[1492]: lxc748457d44159: Link DOWN Apr 16 02:19:37.199110 systemd-networkd[1492]: lxc748457d44159: Lost carrier Apr 16 02:19:37.874962 containerd[1572]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Apr 16 02:19:37.933933 containerd[1572]: time="2026-04-16T02:19:37.933080122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8cn7r,Uid:0c2461c3-3ae2-4eb9-b9b7-1329659c8f8b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0784d3d28cdbdc5a3d327c6a7bc29a286b5b6e87ca5edf6403d254e1883cc67\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" Apr 16 02:19:38.001531 systemd[1]: run-netns-cni\x2d551440b7\x2de6d6\x2da4cf\x2d52aa\x2d8dea9685555c.mount: Deactivated successfully. 
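Mount unit names like run-netns-cni\x2d551440b7\x2d... above are systemd-escaped paths: '/' becomes '-', and characters such as a literal '-' are encoded as \xNN. Decoding the unit name makes the network-namespace path readable. A rough sketch of that decoding (roughly what `systemd-escape --unescape --path` does; this is an approximation, not systemd's implementation):

```python
import re

def unescape_mount_unit(name: str) -> str:
    """Rough inverse of systemd mount-unit escaping:
    drop the .mount suffix, turn '-' back into '/', decode \\xNN bytes."""
    body = name.removesuffix(".mount").replace("-", "/")
    body = re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), body)
    return "/" + body

print(unescape_mount_unit(
    r"run-netns-cni\x2d551440b7\x2de6d6\x2da4cf\x2d52aa\x2d8dea9685555c.mount"))
# -> /run/netns/cni-551440b7-e6d6-a4cf-52aa-8dea9685555c
```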
Apr 16 02:19:38.017091 kubelet[2905]: E0416 02:19:37.997826 2905 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0784d3d28cdbdc5a3d327c6a7bc29a286b5b6e87ca5edf6403d254e1883cc67\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" Apr 16 02:19:38.075339 kubelet[2905]: E0416 02:19:38.058656 2905 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0784d3d28cdbdc5a3d327c6a7bc29a286b5b6e87ca5edf6403d254e1883cc67\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" pod="kube-system/coredns-66bc5c9577-8cn7r" Apr 16 02:19:38.075339 kubelet[2905]: E0416 02:19:38.058762 2905 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0784d3d28cdbdc5a3d327c6a7bc29a286b5b6e87ca5edf6403d254e1883cc67\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" pod="kube-system/coredns-66bc5c9577-8cn7r" Apr 16 02:19:38.075339 kubelet[2905]: E0416 02:19:38.058894 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-8cn7r_kube-system(0c2461c3-3ae2-4eb9-b9b7-1329659c8f8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-8cn7r_kube-system(0c2461c3-3ae2-4eb9-b9b7-1329659c8f8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0784d3d28cdbdc5a3d327c6a7bc29a286b5b6e87ca5edf6403d254e1883cc67\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded\"" pod="kube-system/coredns-66bc5c9577-8cn7r" podUID="0c2461c3-3ae2-4eb9-b9b7-1329659c8f8b" Apr 16 02:19:38.526295 containerd[1572]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Apr 16 02:19:38.554807 containerd[1572]: time="2026-04-16T02:19:38.546260066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wmdr2,Uid:224b1be2-a057-4a43-9d23-a0957387a459,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"85c56389e4aaf951339fa4734b8f39b707b219f294c0b2023b55923d7961db51\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" Apr 16 02:19:38.570015 kubelet[2905]: E0416 02:19:38.569104 2905 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85c56389e4aaf951339fa4734b8f39b707b219f294c0b2023b55923d7961db51\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" Apr 16 02:19:38.570015 kubelet[2905]: E0416 02:19:38.569407 2905 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85c56389e4aaf951339fa4734b8f39b707b219f294c0b2023b55923d7961db51\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" pod="kube-system/coredns-66bc5c9577-wmdr2" Apr 16 02:19:38.570015 
kubelet[2905]: E0416 02:19:38.569429 2905 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85c56389e4aaf951339fa4734b8f39b707b219f294c0b2023b55923d7961db51\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" pod="kube-system/coredns-66bc5c9577-wmdr2" Apr 16 02:19:38.573402 kubelet[2905]: E0416 02:19:38.570076 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wmdr2_kube-system(224b1be2-a057-4a43-9d23-a0957387a459)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wmdr2_kube-system(224b1be2-a057-4a43-9d23-a0957387a459)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85c56389e4aaf951339fa4734b8f39b707b219f294c0b2023b55923d7961db51\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded\"" pod="kube-system/coredns-66bc5c9577-wmdr2" podUID="224b1be2-a057-4a43-9d23-a0957387a459" Apr 16 02:19:38.638078 systemd[1]: run-netns-cni\x2ddb953a3d\x2d6391\x2d8d36\x2d6ee1\x2da46244e9d638.mount: Deactivated successfully. Apr 16 02:19:38.767480 systemd[1]: Started sshd@22-10.0.0.34:22-10.0.0.1:41396.service - OpenSSH per-connection server daemon (10.0.0.1:41396). Apr 16 02:19:39.416326 sshd[4529]: Accepted publickey for core from 10.0.0.1 port 41396 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:19:39.428041 sshd-session[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:19:39.684224 systemd-logind[1559]: New session 23 of user core. Apr 16 02:19:39.716245 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 16 02:19:43.407250 sshd[4533]: Connection closed by 10.0.0.1 port 41396 Apr 16 02:19:43.413521 sshd-session[4529]: pam_unix(sshd:session): session closed for user core Apr 16 02:19:43.484158 systemd[1]: sshd@22-10.0.0.34:22-10.0.0.1:41396.service: Deactivated successfully. Apr 16 02:19:43.548202 systemd[1]: session-23.scope: Deactivated successfully. Apr 16 02:19:43.549351 systemd[1]: session-23.scope: Consumed 2.063s CPU time, 15.5M memory peak. Apr 16 02:19:43.577399 systemd-logind[1559]: Session 23 logged out. Waiting for processes to exit. Apr 16 02:19:43.585902 systemd-logind[1559]: Removed session 23. Apr 16 02:19:48.651490 systemd[1]: Started sshd@23-10.0.0.34:22-10.0.0.1:57890.service - OpenSSH per-connection server daemon (10.0.0.1:57890). Apr 16 02:19:49.153716 sshd[4550]: Accepted publickey for core from 10.0.0.1 port 57890 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:19:49.169337 sshd-session[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:19:49.441145 systemd-logind[1559]: New session 24 of user core. Apr 16 02:19:49.491974 systemd[1]: Started session-24.scope - Session 24 of User core. 
Apr 16 02:19:50.201819 kubelet[2905]: E0416 02:19:50.197491 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:19:50.235934 containerd[1572]: time="2026-04-16T02:19:50.235479492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wmdr2,Uid:224b1be2-a057-4a43-9d23-a0957387a459,Namespace:kube-system,Attempt:0,}" Apr 16 02:19:50.280878 kubelet[2905]: E0416 02:19:50.233296 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:19:50.364204 containerd[1572]: time="2026-04-16T02:19:50.362820551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8cn7r,Uid:0c2461c3-3ae2-4eb9-b9b7-1329659c8f8b,Namespace:kube-system,Attempt:0,}" Apr 16 02:19:53.152162 systemd-networkd[1492]: lxc45b69f38eba1: Link UP Apr 16 02:19:53.303373 kernel: eth0: renamed from tmp358f9 Apr 16 02:19:53.424113 systemd-networkd[1492]: lxc45b69f38eba1: Gained carrier Apr 16 02:19:53.520483 systemd-networkd[1492]: lxcd4a81b24dd5f: Link UP Apr 16 02:19:53.660953 kernel: eth0: renamed from tmp7a990 Apr 16 02:19:53.712114 systemd-networkd[1492]: lxcd4a81b24dd5f: Gained carrier Apr 16 02:19:54.935629 systemd-networkd[1492]: lxc45b69f38eba1: Gained IPv6LL Apr 16 02:19:55.278519 kubelet[2905]: E0416 02:19:55.277942 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.095s" Apr 16 02:19:55.443359 systemd-networkd[1492]: lxcd4a81b24dd5f: Gained IPv6LL Apr 16 02:19:59.244470 sshd[4555]: Connection closed by 10.0.0.1 port 57890 Apr 16 02:19:59.315134 sshd-session[4550]: pam_unix(sshd:session): session closed for user core Apr 16 02:19:59.349746 systemd-logind[1559]: Session 24 logged out. Waiting for processes to exit. Apr 16 02:19:59.385079 systemd[1]: sshd@23-10.0.0.34:22-10.0.0.1:57890.service: Deactivated successfully. Apr 16 02:19:59.594148 systemd[1]: session-24.scope: Deactivated successfully. Apr 16 02:19:59.599146 systemd[1]: session-24.scope: Consumed 4.024s CPU time, 16.1M memory peak. Apr 16 02:19:59.624834 systemd-logind[1559]: Removed session 24. Apr 16 02:20:00.078486 systemd[1]: cri-containerd-bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90.scope: Deactivated successfully. Apr 16 02:20:00.093249 systemd[1]: cri-containerd-bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90.scope: Consumed 19.859s CPU time, 59.2M memory peak, 5.3M read from disk. Apr 16 02:20:00.101905 containerd[1572]: time="2026-04-16T02:20:00.101473868Z" level=info msg="received container exit event container_id:\"bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90\" id:\"bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90\" pid:3630 exit_status:1 exited_at:{seconds:1776306000 nanos:96384225}" Apr 16 02:20:01.016187 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90-rootfs.mount: Deactivated successfully. 
Apr 16 02:20:01.124946 kubelet[2905]: E0416 02:20:01.119445 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:01.716153 kubelet[2905]: I0416 02:20:01.706245 2905 scope.go:117] "RemoveContainer" containerID="b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d" Apr 16 02:20:01.805064 kubelet[2905]: I0416 02:20:01.804615 2905 scope.go:117] "RemoveContainer" containerID="bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90" Apr 16 02:20:01.811066 kubelet[2905]: E0416 02:20:01.810383 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:01.813341 kubelet[2905]: E0416 02:20:01.811440 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:20:01.954778 containerd[1572]: time="2026-04-16T02:20:01.952971331Z" level=info msg="RemoveContainer for \"b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d\"" Apr 16 02:20:02.015563 containerd[1572]: time="2026-04-16T02:20:02.015383666Z" level=info msg="RemoveContainer for \"b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d\" returns successfully" Apr 16 02:20:03.316285 systemd[1]: cri-containerd-5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4.scope: Deactivated successfully. Apr 16 02:20:03.341100 systemd[1]: cri-containerd-5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4.scope: Consumed 14.264s CPU time, 23.4M memory peak, 200K read from disk. Apr 16 02:20:03.382417 containerd[1572]: time="2026-04-16T02:20:03.329438103Z" level=info msg="received container exit event container_id:\"5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4\" id:\"5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4\" pid:3260 exit_status:1 exited_at:{seconds:1776306003 nanos:324455311}" Apr 16 02:20:04.221390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4-rootfs.mount: Deactivated successfully. 
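The kube-controller-manager restarts above show kubelet's crash-loop backoff growing from "back-off 10s" earlier to "back-off 20s" here: the delay doubles after each failed restart, starting at 10s and (per kubelet's documented defaults, which I'm assuming rather than quoting from this log) capping at 5 minutes. A tiny illustrative sketch of that schedule, not kubelet code:

```python
def crashloop_backoffs(initial: float = 10.0, cap: float = 300.0, attempts: int = 6):
    """Yield the CrashLoopBackOff delay applied before each successive restart."""
    delay = initial
    for _ in range(attempts):
        yield delay
        delay = min(delay * 2, cap)

print(list(crashloop_backoffs()))   # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]
```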
Apr 16 02:20:04.456487 kubelet[2905]: I0416 02:20:04.455630 2905 scope.go:117] "RemoveContainer" containerID="bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90" Apr 16 02:20:04.456487 kubelet[2905]: E0416 02:20:04.456035 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:04.480523 kubelet[2905]: E0416 02:20:04.469151 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:20:04.481423 systemd[1]: Started sshd@24-10.0.0.34:22-10.0.0.1:57814.service - OpenSSH per-connection server daemon (10.0.0.1:57814). Apr 16 02:20:05.237645 kubelet[2905]: E0416 02:20:05.237186 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:05.684177 sshd[4644]: Accepted publickey for core from 10.0.0.1 port 57814 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:20:05.686431 sshd-session[4644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:20:06.006087 systemd-logind[1559]: New session 25 of user core. Apr 16 02:20:06.065060 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 16 02:20:06.590891 kubelet[2905]: I0416 02:20:06.581441 2905 scope.go:117] "RemoveContainer" containerID="5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4" Apr 16 02:20:06.590891 kubelet[2905]: I0416 02:20:06.587352 2905 scope.go:117] "RemoveContainer" containerID="a500918002e736f490697b989b5dc53af78e80ad98d850123d7414e4d6a2f590" Apr 16 02:20:06.603619 kubelet[2905]: E0416 02:20:06.597294 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:06.634194 kubelet[2905]: E0416 02:20:06.621464 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 16 02:20:06.913961 containerd[1572]: time="2026-04-16T02:20:06.895060432Z" level=info msg="RemoveContainer for \"a500918002e736f490697b989b5dc53af78e80ad98d850123d7414e4d6a2f590\"" Apr 16 02:20:07.084373 containerd[1572]: time="2026-04-16T02:20:07.084112131Z" level=info msg="RemoveContainer for \"a500918002e736f490697b989b5dc53af78e80ad98d850123d7414e4d6a2f590\" returns successfully" Apr 16 02:20:07.805976 kubelet[2905]: I0416 02:20:07.803379 2905 scope.go:117] "RemoveContainer" containerID="5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4" Apr 16 02:20:07.805976 kubelet[2905]: E0416 02:20:07.804427 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:07.899820 kubelet[2905]: E0416 02:20:07.898481 2905 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 16 02:20:08.897970 kubelet[2905]: I0416 02:20:08.881583 2905 scope.go:117] "RemoveContainer" containerID="5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4" Apr 16 02:20:08.897970 kubelet[2905]: E0416 02:20:08.884165 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:08.897970 kubelet[2905]: E0416 02:20:08.885006 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 16 02:20:10.127968 kubelet[2905]: E0416 02:20:10.127333 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:14.302121 sshd[4648]: Connection closed by 10.0.0.1 port 57814 Apr 16 02:20:14.323102 sshd-session[4644]: pam_unix(sshd:session): session closed for user core Apr 16 02:20:14.440524 systemd[1]: sshd@24-10.0.0.34:22-10.0.0.1:57814.service: Deactivated successfully. Apr 16 02:20:14.594490 systemd[1]: session-25.scope: Deactivated successfully. Apr 16 02:20:14.597744 systemd[1]: session-25.scope: Consumed 5.552s CPU time, 15.7M memory peak. Apr 16 02:20:14.626287 systemd-logind[1559]: Session 25 logged out. Waiting for processes to exit. Apr 16 02:20:14.705056 systemd-logind[1559]: Removed session 25. Apr 16 02:20:19.129986 kubelet[2905]: I0416 02:20:19.129099 2905 scope.go:117] "RemoveContainer" containerID="bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90" Apr 16 02:20:19.129986 kubelet[2905]: E0416 02:20:19.129524 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:19.140673 kubelet[2905]: E0416 02:20:19.137525 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:20:19.703309 systemd[1]: Started sshd@25-10.0.0.34:22-10.0.0.1:51558.service - OpenSSH per-connection server daemon (10.0.0.1:51558). 
Apr 16 02:20:20.128730 kubelet[2905]: I0416 02:20:20.128273 2905 scope.go:117] "RemoveContainer" containerID="5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4" Apr 16 02:20:20.131310 kubelet[2905]: E0416 02:20:20.130909 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:20.168783 sshd[4665]: Accepted publickey for core from 10.0.0.1 port 51558 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:20:20.182503 sshd-session[4665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:20:20.376141 systemd-logind[1559]: New session 26 of user core. Apr 16 02:20:20.396995 containerd[1572]: time="2026-04-16T02:20:20.394141230Z" level=info msg="CreateContainer within sandbox \"d0d51dcff03504f1fef58134facfd4cd81ca116611e32da6c564cac072c5a2c6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}" Apr 16 02:20:20.401069 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 16 02:20:20.605223 containerd[1572]: time="2026-04-16T02:20:20.603093478Z" level=info msg="Container e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:20:20.702140 containerd[1572]: time="2026-04-16T02:20:20.697972903Z" level=info msg="CreateContainer within sandbox \"d0d51dcff03504f1fef58134facfd4cd81ca116611e32da6c564cac072c5a2c6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98\"" Apr 16 02:20:20.851436 containerd[1572]: time="2026-04-16T02:20:20.851254116Z" level=info msg="StartContainer for \"e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98\"" Apr 16 02:20:20.865430 containerd[1572]: time="2026-04-16T02:20:20.865184680Z" level=info msg="connecting to shim e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98" address="unix:///run/containerd/s/5f74707208b0d02950181218f9914fc308cbc5438693fd3705e35aae6ffc62c0" protocol=ttrpc version=3 Apr 16 02:20:22.434403 systemd[1]: Started cri-containerd-e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98.scope - libcontainer container e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98. Apr 16 02:20:24.694771 containerd[1572]: time="2026-04-16T02:20:24.694317356Z" level=error msg="get state for e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98" error="context deadline exceeded" Apr 16 02:20:24.694771 containerd[1572]: time="2026-04-16T02:20:24.694497121Z" level=warning msg="unknown status" status=0 Apr 16 02:20:26.709370 containerd[1572]: time="2026-04-16T02:20:26.708542878Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 16 02:20:27.137073 containerd[1572]: time="2026-04-16T02:20:27.136356458Z" level=info msg="StartContainer for \"e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98\" returns successfully" Apr 16 02:20:27.493133 sshd[4668]: Connection closed by 10.0.0.1 port 51558 Apr 16 02:20:27.483309 sshd-session[4665]: pam_unix(sshd:session): session closed for user core Apr 16 02:20:27.625158 systemd[1]: sshd@25-10.0.0.34:22-10.0.0.1:51558.service: Deactivated successfully. Apr 16 02:20:27.668497 systemd[1]: session-26.scope: Deactivated successfully. Apr 16 02:20:27.669989 systemd[1]: session-26.scope: Consumed 3.975s CPU time, 17.8M memory peak. 
Apr 16 02:20:27.679676 systemd-logind[1559]: Session 26 logged out. Waiting for processes to exit. Apr 16 02:20:27.690436 systemd-logind[1559]: Removed session 26. Apr 16 02:20:27.826082 kubelet[2905]: E0416 02:20:27.825417 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:29.096760 kubelet[2905]: E0416 02:20:29.095981 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:31.105936 kubelet[2905]: I0416 02:20:31.103169 2905 scope.go:117] "RemoveContainer" containerID="bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90" Apr 16 02:20:31.111296 kubelet[2905]: E0416 02:20:31.111269 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:31.114509 kubelet[2905]: E0416 02:20:31.112525 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:31.201134 containerd[1572]: time="2026-04-16T02:20:31.200878743Z" level=info msg="CreateContainer within sandbox \"8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:4,}" Apr 16 02:20:31.338950 containerd[1572]: time="2026-04-16T02:20:31.336280240Z" level=info msg="Container 6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:20:31.430097 containerd[1572]: time="2026-04-16T02:20:31.425428347Z" level=info msg="CreateContainer within sandbox \"8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:4,} returns container id \"6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85\"" Apr 16 02:20:31.605026 containerd[1572]: time="2026-04-16T02:20:31.604754825Z" level=info msg="StartContainer for \"6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85\"" Apr 16 02:20:31.635193 containerd[1572]: time="2026-04-16T02:20:31.632228888Z" level=info msg="connecting to shim 6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85" address="unix:///run/containerd/s/19fb7b3958679c24ac66e8dd57527f0cf6dd433ec0ccb7dc7514e788b8b7a005" protocol=ttrpc version=3 Apr 16 02:20:32.134579 systemd[1]: Started cri-containerd-6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85.scope - libcontainer container 6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85. Apr 16 02:20:32.741325 systemd[1]: Started sshd@26-10.0.0.34:22-10.0.0.1:58218.service - OpenSSH per-connection server daemon (10.0.0.1:58218). 
Apr 16 02:20:33.042189 kubelet[2905]: E0416 02:20:33.041254 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:33.703026 containerd[1572]: time="2026-04-16T02:20:33.702828748Z" level=info msg="StartContainer for \"6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85\" returns successfully" Apr 16 02:20:34.326297 sshd[4739]: Accepted publickey for core from 10.0.0.1 port 58218 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:20:34.338235 sshd-session[4739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:20:34.415037 systemd-logind[1559]: New session 27 of user core. Apr 16 02:20:34.528184 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 16 02:20:34.723504 kubelet[2905]: E0416 02:20:34.720715 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:38.096994 kubelet[2905]: E0416 02:20:38.092407 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:38.147604 sshd[4754]: Connection closed by 10.0.0.1 port 58218 Apr 16 02:20:38.156934 sshd-session[4739]: pam_unix(sshd:session): session closed for user core Apr 16 02:20:38.278505 systemd[1]: sshd@26-10.0.0.34:22-10.0.0.1:58218.service: Deactivated successfully. Apr 16 02:20:38.379052 systemd[1]: session-27.scope: Deactivated successfully. Apr 16 02:20:38.380437 systemd[1]: session-27.scope: Consumed 2.122s CPU time, 14.9M memory peak. Apr 16 02:20:38.439121 systemd-logind[1559]: Session 27 logged out. Waiting for processes to exit. Apr 16 02:20:38.459501 systemd-logind[1559]: Removed session 27. Apr 16 02:20:40.314918 containerd[1572]: time="2026-04-16T02:20:40.313915497Z" level=warning msg="container event discarded" container=4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c type=CONTAINER_CREATED_EVENT Apr 16 02:20:40.314918 containerd[1572]: time="2026-04-16T02:20:40.314148821Z" level=warning msg="container event discarded" container=4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c type=CONTAINER_STARTED_EVENT Apr 16 02:20:43.132927 kubelet[2905]: E0416 02:20:43.132174 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:43.354398 systemd[1]: Started sshd@27-10.0.0.34:22-10.0.0.1:41958.service - OpenSSH per-connection server daemon (10.0.0.1:41958). 
Apr 16 02:20:43.816915 kubelet[2905]: E0416 02:20:43.813894 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:44.253366 kubelet[2905]: E0416 02:20:44.253202 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:44.255319 kubelet[2905]: E0416 02:20:44.255298 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:20:44.321977 sshd[4774]: Accepted publickey for core from 10.0.0.1 port 41958 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:20:44.416774 sshd-session[4774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:20:44.590487 systemd-logind[1559]: New session 28 of user core. Apr 16 02:20:44.703222 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 16 02:20:47.165330 containerd[1572]: time="2026-04-16T02:20:47.164839525Z" level=warning msg="container event discarded" container=51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6 type=CONTAINER_CREATED_EVENT Apr 16 02:20:47.165330 containerd[1572]: time="2026-04-16T02:20:47.165220642Z" level=warning msg="container event discarded" container=51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6 type=CONTAINER_STARTED_EVENT Apr 16 02:20:47.640327 sshd[4777]: Connection closed by 10.0.0.1 port 41958 Apr 16 02:20:47.645193 sshd-session[4774]: pam_unix(sshd:session): session closed for user core Apr 16 02:20:47.740731 systemd[1]: sshd@27-10.0.0.34:22-10.0.0.1:41958.service: Deactivated successfully. Apr 16 02:20:47.762616 systemd[1]: session-28.scope: Deactivated successfully. Apr 16 02:20:47.765753 systemd[1]: session-28.scope: Consumed 1.852s CPU time, 17.1M memory peak. Apr 16 02:20:47.776918 systemd-logind[1559]: Session 28 logged out. Waiting for processes to exit. Apr 16 02:20:47.779179 systemd-logind[1559]: Removed session 28. Apr 16 02:20:48.486259 containerd[1572]: time="2026-04-16T02:20:48.485426443Z" level=warning msg="container event discarded" container=8b438716b883e8fa413da5123edde775ed7f82bd8e61f35b29d76393fd7b2b32 type=CONTAINER_CREATED_EVENT Apr 16 02:20:48.493015 containerd[1572]: time="2026-04-16T02:20:48.492543285Z" level=warning msg="container event discarded" container=8b438716b883e8fa413da5123edde775ed7f82bd8e61f35b29d76393fd7b2b32 type=CONTAINER_STARTED_EVENT Apr 16 02:20:49.734741 containerd[1572]: time="2026-04-16T02:20:49.732121339Z" level=warning msg="container event discarded" container=a6152aff14af701d496a9092d7169e915f3cc99fcf9e9b9ff3a43473450f4331 type=CONTAINER_CREATED_EVENT Apr 16 02:20:52.869973 systemd[1]: Started sshd@28-10.0.0.34:22-10.0.0.1:48232.service - OpenSSH per-connection server daemon (10.0.0.1:48232). 
Apr 16 02:20:53.018016 containerd[1572]: time="2026-04-16T02:20:53.015184183Z" level=warning msg="container event discarded" container=6b4ea757f23e306ae6bfd9f988c8d5844ab4e5030a8da90a83e19d0058779129 type=CONTAINER_STOPPED_EVENT Apr 16 02:20:53.503780 containerd[1572]: time="2026-04-16T02:20:53.502173278Z" level=warning msg="container event discarded" container=a6152aff14af701d496a9092d7169e915f3cc99fcf9e9b9ff3a43473450f4331 type=CONTAINER_STARTED_EVENT Apr 16 02:20:53.522994 sshd[4792]: Accepted publickey for core from 10.0.0.1 port 48232 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:20:53.533408 sshd-session[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:20:53.688377 systemd-logind[1559]: New session 29 of user core. Apr 16 02:20:53.735619 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 16 02:20:56.985925 sshd[4795]: Connection closed by 10.0.0.1 port 48232 Apr 16 02:20:56.990383 sshd-session[4792]: pam_unix(sshd:session): session closed for user core Apr 16 02:20:57.178798 containerd[1572]: time="2026-04-16T02:20:57.178444162Z" level=warning msg="container event discarded" container=42f8ee2c39c1ece526ac37473a8a0bd280997855c294400c28ad5b3842ee2228 type=CONTAINER_DELETED_EVENT Apr 16 02:20:57.341292 systemd[1]: sshd@28-10.0.0.34:22-10.0.0.1:48232.service: Deactivated successfully. Apr 16 02:20:57.380224 systemd[1]: session-29.scope: Deactivated successfully. Apr 16 02:20:57.383232 systemd[1]: session-29.scope: Consumed 2.093s CPU time, 15.4M memory peak. Apr 16 02:20:57.397066 systemd-logind[1559]: Session 29 logged out. Waiting for processes to exit. Apr 16 02:20:57.541450 systemd[1]: Started sshd@29-10.0.0.34:22-10.0.0.1:57464.service - OpenSSH per-connection server daemon (10.0.0.1:57464). Apr 16 02:20:57.575220 systemd-logind[1559]: Removed session 29. Apr 16 02:20:58.033983 containerd[1572]: time="2026-04-16T02:20:58.031862515Z" level=warning msg="container event discarded" container=a500918002e736f490697b989b5dc53af78e80ad98d850123d7414e4d6a2f590 type=CONTAINER_STOPPED_EVENT Apr 16 02:20:58.188974 containerd[1572]: time="2026-04-16T02:20:58.188853744Z" level=warning msg="container event discarded" container=b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d type=CONTAINER_CREATED_EVENT Apr 16 02:20:58.325443 sshd[4810]: Accepted publickey for core from 10.0.0.1 port 57464 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:20:58.332421 sshd-session[4810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:20:58.505179 systemd-logind[1559]: New session 30 of user core. Apr 16 02:20:58.533290 systemd[1]: Started session-30.scope - Session 30 of User core. 
Apr 16 02:20:58.985713 containerd[1572]: time="2026-04-16T02:20:58.985375661Z" level=warning msg="container event discarded" container=6b4ea757f23e306ae6bfd9f988c8d5844ab4e5030a8da90a83e19d0058779129 type=CONTAINER_DELETED_EVENT Apr 16 02:20:59.731041 containerd[1572]: time="2026-04-16T02:20:59.710506858Z" level=warning msg="container event discarded" container=5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4 type=CONTAINER_CREATED_EVENT Apr 16 02:20:59.731041 containerd[1572]: time="2026-04-16T02:20:59.710693852Z" level=warning msg="container event discarded" container=b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d type=CONTAINER_STARTED_EVENT Apr 16 02:21:01.765694 containerd[1572]: time="2026-04-16T02:21:01.744905705Z" level=warning msg="container event discarded" container=5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4 type=CONTAINER_STARTED_EVENT Apr 16 02:21:03.197898 kubelet[2905]: E0416 02:21:03.186930 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:21:03.597255 containerd[1572]: time="2026-04-16T02:21:03.591786876Z" level=warning msg="container event discarded" container=846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4 type=CONTAINER_CREATED_EVENT Apr 16 02:21:06.113942 containerd[1572]: time="2026-04-16T02:21:06.113115517Z" level=warning msg="container event discarded" container=846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4 type=CONTAINER_STARTED_EVENT Apr 16 02:21:08.200218 sshd[4815]: Connection closed by 10.0.0.1 port 57464 Apr 16 02:21:08.208099 sshd-session[4810]: pam_unix(sshd:session): session closed for user core Apr 16 02:21:08.581797 systemd[1]: sshd@29-10.0.0.34:22-10.0.0.1:57464.service: Deactivated successfully. Apr 16 02:21:08.619031 systemd[1]: session-30.scope: Deactivated successfully. Apr 16 02:21:08.619394 systemd[1]: session-30.scope: Consumed 4.599s CPU time, 25.4M memory peak. Apr 16 02:21:08.628710 systemd-logind[1559]: Session 30 logged out. Waiting for processes to exit. Apr 16 02:21:08.725368 systemd-logind[1559]: Removed session 30. Apr 16 02:21:08.773917 systemd[1]: Started sshd@30-10.0.0.34:22-10.0.0.1:37248.service - OpenSSH per-connection server daemon (10.0.0.1:37248). Apr 16 02:21:09.677433 sshd[4826]: Accepted publickey for core from 10.0.0.1 port 37248 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:21:09.696765 sshd-session[4826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:21:09.843294 systemd-logind[1559]: New session 31 of user core. Apr 16 02:21:10.037160 systemd[1]: Started session-31.scope - Session 31 of User core. Apr 16 02:21:11.974961 sshd[4830]: Connection closed by 10.0.0.1 port 37248 Apr 16 02:21:11.980869 sshd-session[4826]: pam_unix(sshd:session): session closed for user core Apr 16 02:21:12.022372 systemd[1]: sshd@30-10.0.0.34:22-10.0.0.1:37248.service: Deactivated successfully. Apr 16 02:21:12.048010 systemd[1]: session-31.scope: Deactivated successfully. Apr 16 02:21:12.048752 systemd[1]: session-31.scope: Consumed 1.128s CPU time, 18.4M memory peak. Apr 16 02:21:12.062159 systemd-logind[1559]: Session 31 logged out. Waiting for processes to exit. Apr 16 02:21:12.087909 systemd-logind[1559]: Removed session 31. 
Apr 16 02:21:17.131701 kubelet[2905]: E0416 02:21:17.131051 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:21:17.506133 systemd[1]: Started sshd@31-10.0.0.34:22-10.0.0.1:42834.service - OpenSSH per-connection server daemon (10.0.0.1:42834). Apr 16 02:21:18.590875 sshd[4845]: Accepted publickey for core from 10.0.0.1 port 42834 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:21:18.614497 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:21:18.886802 systemd-logind[1559]: New session 32 of user core. Apr 16 02:21:18.970418 systemd[1]: Started session-32.scope - Session 32 of User core. Apr 16 02:21:21.428023 sshd[4848]: Connection closed by 10.0.0.1 port 42834 Apr 16 02:21:21.436918 sshd-session[4845]: pam_unix(sshd:session): session closed for user core Apr 16 02:21:21.567250 systemd[1]: sshd@31-10.0.0.34:22-10.0.0.1:42834.service: Deactivated successfully. Apr 16 02:21:21.594926 systemd[1]: session-32.scope: Deactivated successfully. Apr 16 02:21:21.595261 systemd[1]: session-32.scope: Consumed 1.106s CPU time, 17.4M memory peak. Apr 16 02:21:21.612287 systemd-logind[1559]: Session 32 logged out. Waiting for processes to exit. Apr 16 02:21:21.645391 systemd-logind[1559]: Removed session 32. Apr 16 02:21:23.588055 systemd-networkd[1492]: lxc45b69f38eba1: Link DOWN Apr 16 02:21:23.595284 systemd-networkd[1492]: lxc45b69f38eba1: Lost carrier Apr 16 02:21:24.001084 containerd[1572]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Apr 16 02:21:24.017297 containerd[1572]: time="2026-04-16T02:21:24.016928449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wmdr2,Uid:224b1be2-a057-4a43-9d23-a0957387a459,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"358f965348f61f706333985f0583d858237f6c246bd4f16a0f8a44cd58370dd0\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" Apr 16 02:21:24.023456 kubelet[2905]: E0416 02:21:24.023105 2905 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"358f965348f61f706333985f0583d858237f6c246bd4f16a0f8a44cd58370dd0\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" Apr 16 02:21:24.023456 kubelet[2905]: E0416 02:21:24.023272 2905 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"358f965348f61f706333985f0583d858237f6c246bd4f16a0f8a44cd58370dd0\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" pod="kube-system/coredns-66bc5c9577-wmdr2" Apr 16 02:21:24.023456 kubelet[2905]: E0416 02:21:24.023340 2905 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"358f965348f61f706333985f0583d858237f6c246bd4f16a0f8a44cd58370dd0\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" pod="kube-system/coredns-66bc5c9577-wmdr2" Apr 16 
02:21:24.026510 systemd[1]: run-netns-cni\x2d5a659ba5\x2dfd08\x2de2ca\x2d1970\x2db1f54f989e08.mount: Deactivated successfully. Apr 16 02:21:24.027712 systemd-networkd[1492]: lxcd4a81b24dd5f: Link DOWN Apr 16 02:21:24.027715 systemd-networkd[1492]: lxcd4a81b24dd5f: Lost carrier Apr 16 02:21:24.032796 kubelet[2905]: E0416 02:21:24.023490 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wmdr2_kube-system(224b1be2-a057-4a43-9d23-a0957387a459)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wmdr2_kube-system(224b1be2-a057-4a43-9d23-a0957387a459)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"358f965348f61f706333985f0583d858237f6c246bd4f16a0f8a44cd58370dd0\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded\"" pod="kube-system/coredns-66bc5c9577-wmdr2" podUID="224b1be2-a057-4a43-9d23-a0957387a459" Apr 16 02:21:24.647702 containerd[1572]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Apr 16 02:21:24.684996 containerd[1572]: time="2026-04-16T02:21:24.682676336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8cn7r,Uid:0c2461c3-3ae2-4eb9-b9b7-1329659c8f8b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a9905745475594e4d8664bf19c576973e9516d5f9cb3130375fd437c6dd720e\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" Apr 16 02:21:24.683683 systemd[1]: run-netns-cni\x2dcf74a007\x2d9332\x2dea6b\x2dd35b\x2de3625e26f6a6.mount: Deactivated successfully. 
Apr 16 02:21:24.688287 kubelet[2905]: E0416 02:21:24.688069 2905 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a9905745475594e4d8664bf19c576973e9516d5f9cb3130375fd437c6dd720e\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" Apr 16 02:21:24.688287 kubelet[2905]: E0416 02:21:24.688189 2905 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a9905745475594e4d8664bf19c576973e9516d5f9cb3130375fd437c6dd720e\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" pod="kube-system/coredns-66bc5c9577-8cn7r" Apr 16 02:21:24.688739 kubelet[2905]: E0416 02:21:24.688213 2905 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a9905745475594e4d8664bf19c576973e9516d5f9cb3130375fd437c6dd720e\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" pod="kube-system/coredns-66bc5c9577-8cn7r" Apr 16 02:21:24.689167 kubelet[2905]: E0416 02:21:24.689120 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-8cn7r_kube-system(0c2461c3-3ae2-4eb9-b9b7-1329659c8f8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-8cn7r_kube-system(0c2461c3-3ae2-4eb9-b9b7-1329659c8f8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a9905745475594e4d8664bf19c576973e9516d5f9cb3130375fd437c6dd720e\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded\"" pod="kube-system/coredns-66bc5c9577-8cn7r" podUID="0c2461c3-3ae2-4eb9-b9b7-1329659c8f8b" Apr 16 02:21:26.525272 systemd[1]: Started sshd@32-10.0.0.34:22-10.0.0.1:43606.service - OpenSSH per-connection server daemon (10.0.0.1:43606). Apr 16 02:21:26.934972 sshd[4896]: Accepted publickey for core from 10.0.0.1 port 43606 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:21:26.919393 sshd-session[4896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:21:27.023614 systemd-logind[1559]: New session 33 of user core. Apr 16 02:21:27.085052 systemd[1]: Started session-33.scope - Session 33 of User core. Apr 16 02:21:27.701952 sshd[4899]: Connection closed by 10.0.0.1 port 43606 Apr 16 02:21:27.716759 sshd-session[4896]: pam_unix(sshd:session): session closed for user core Apr 16 02:21:27.747397 systemd[1]: sshd@32-10.0.0.34:22-10.0.0.1:43606.service: Deactivated successfully. Apr 16 02:21:27.750818 systemd-logind[1559]: Session 33 logged out. Waiting for processes to exit. Apr 16 02:21:27.774219 systemd[1]: session-33.scope: Deactivated successfully. Apr 16 02:21:27.790644 systemd-logind[1559]: Removed session 33. 
Apr 16 02:21:30.250776 systemd-networkd[1492]: lxc_health: Link DOWN Apr 16 02:21:30.250786 systemd-networkd[1492]: lxc_health: Lost carrier Apr 16 02:21:30.573631 systemd-networkd[1492]: lxc_health: Link UP Apr 16 02:21:30.591671 systemd-networkd[1492]: lxc_health: Gained carrier Apr 16 02:21:32.404039 systemd-networkd[1492]: lxc_health: Gained IPv6LL Apr 16 02:21:33.049047 systemd[1]: Started sshd@33-10.0.0.34:22-10.0.0.1:43608.service - OpenSSH per-connection server daemon (10.0.0.1:43608). Apr 16 02:21:33.571477 sshd[4940]: Accepted publickey for core from 10.0.0.1 port 43608 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:21:33.603949 sshd-session[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:21:33.926728 systemd-logind[1559]: New session 34 of user core. Apr 16 02:21:34.011978 systemd[1]: Started session-34.scope - Session 34 of User core. Apr 16 02:21:37.106069 kubelet[2905]: E0416 02:21:37.105487 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:21:38.207538 kubelet[2905]: E0416 02:21:38.205788 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:21:38.259794 containerd[1572]: time="2026-04-16T02:21:38.259470436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8cn7r,Uid:0c2461c3-3ae2-4eb9-b9b7-1329659c8f8b,Namespace:kube-system,Attempt:0,}" Apr 16 02:21:39.234894 kubelet[2905]: E0416 02:21:39.233112 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:21:39.239820 containerd[1572]: time="2026-04-16T02:21:39.237931793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wmdr2,Uid:224b1be2-a057-4a43-9d23-a0957387a459,Namespace:kube-system,Attempt:0,}" Apr 16 02:21:40.593089 systemd-networkd[1492]: lxc80dfe93230d7: Link UP Apr 16 02:21:40.660483 kernel: eth0: renamed from tmp394b9 Apr 16 02:21:40.808474 systemd-networkd[1492]: lxc80dfe93230d7: Gained carrier Apr 16 02:21:41.872005 systemd-networkd[1492]: lxc80dfe93230d7: Gained IPv6LL Apr 16 02:21:42.143215 systemd-networkd[1492]: lxcd584a105c3ca: Link UP Apr 16 02:21:42.179753 kernel: eth0: renamed from tmp55fef Apr 16 02:21:42.183697 sshd[4943]: Connection closed by 10.0.0.1 port 43608 Apr 16 02:21:42.201305 sshd-session[4940]: pam_unix(sshd:session): session closed for user core Apr 16 02:21:42.315870 systemd[1]: sshd@33-10.0.0.34:22-10.0.0.1:43608.service: Deactivated successfully. Apr 16 02:21:42.345951 systemd-networkd[1492]: lxcd584a105c3ca: Gained carrier Apr 16 02:21:42.431883 systemd[1]: session-34.scope: Deactivated successfully. Apr 16 02:21:42.469630 systemd[1]: session-34.scope: Consumed 4.594s CPU time, 15.9M memory peak. Apr 16 02:21:42.597639 systemd-logind[1559]: Session 34 logged out. Waiting for processes to exit. Apr 16 02:21:42.684761 systemd-logind[1559]: Removed session 34. 
Apr 16 02:21:43.792990 systemd-networkd[1492]: lxcd584a105c3ca: Gained IPv6LL Apr 16 02:21:43.942973 containerd[1572]: time="2026-04-16T02:21:43.942018821Z" level=info msg="connecting to shim 394b95486e1dd6939c3261152e91a35c63f9a6fb6145b7f4b695a46aabff7b7e" address="unix:///run/containerd/s/6fb61340d6e65f0abbeb6757aa0b1e04e0240f7f66623a005e4405364ec7e598" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:21:44.687861 kubelet[2905]: E0416 02:21:44.674892 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.557s" Apr 16 02:21:45.636516 systemd[1]: Started cri-containerd-394b95486e1dd6939c3261152e91a35c63f9a6fb6145b7f4b695a46aabff7b7e.scope - libcontainer container 394b95486e1dd6939c3261152e91a35c63f9a6fb6145b7f4b695a46aabff7b7e. Apr 16 02:21:45.798536 kubelet[2905]: E0416 02:21:45.797665 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.119s" Apr 16 02:21:46.505115 containerd[1572]: time="2026-04-16T02:21:46.503998007Z" level=info msg="connecting to shim 55fef7e28303f956860797a3b84004b8e080f0125bab9a211c2804f9ec8b2010" address="unix:///run/containerd/s/68c90d0a339f8e3f62c7aa54626b7f73a165f7228db0de6a4e6e74c58d904afd" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:21:46.856499 systemd-resolved[1501]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 02:21:47.840453 systemd[1]: Started sshd@34-10.0.0.34:22-10.0.0.1:51930.service - OpenSSH per-connection server daemon (10.0.0.1:51930). Apr 16 02:21:47.987028 containerd[1572]: time="2026-04-16T02:21:47.943391917Z" level=error msg="get state for 394b95486e1dd6939c3261152e91a35c63f9a6fb6145b7f4b695a46aabff7b7e" error="context deadline exceeded" Apr 16 02:21:47.998117 containerd[1572]: time="2026-04-16T02:21:47.995967731Z" level=warning msg="unknown status" status=0 Apr 16 02:21:48.028296 systemd[1]: Started cri-containerd-55fef7e28303f956860797a3b84004b8e080f0125bab9a211c2804f9ec8b2010.scope - libcontainer container 55fef7e28303f956860797a3b84004b8e080f0125bab9a211c2804f9ec8b2010. Apr 16 02:21:48.857690 containerd[1572]: time="2026-04-16T02:21:48.844218153Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 16 02:21:48.927677 systemd[1]: cri-containerd-6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85.scope: Deactivated successfully. Apr 16 02:21:48.933496 systemd[1]: cri-containerd-6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85.scope: Consumed 10.107s CPU time, 44.8M memory peak, 692K read from disk. Apr 16 02:21:49.167230 sshd[5076]: Accepted publickey for core from 10.0.0.1 port 51930 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:21:49.215966 containerd[1572]: time="2026-04-16T02:21:49.199528444Z" level=info msg="received container exit event container_id:\"6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85\" id:\"6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85\" pid:4732 exit_status:1 exited_at:{seconds:1776306109 nanos:176465726}" Apr 16 02:21:49.261487 sshd-session[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:21:49.603155 systemd-logind[1559]: New session 35 of user core. 
Apr 16 02:21:49.681997 kubelet[2905]: E0416 02:21:49.680408 2905 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c2461c3_3ae2_4eb9_b9b7_1329659c8f8b.slice/cri-containerd-394b95486e1dd6939c3261152e91a35c63f9a6fb6145b7f4b695a46aabff7b7e.scope\": RecentStats: unable to find data in memory cache]" Apr 16 02:21:49.731686 systemd[1]: Started session-35.scope - Session 35 of User core. Apr 16 02:21:50.040986 systemd-resolved[1501]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 02:21:50.048973 containerd[1572]: time="2026-04-16T02:21:50.047106778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8cn7r,Uid:0c2461c3-3ae2-4eb9-b9b7-1329659c8f8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"394b95486e1dd6939c3261152e91a35c63f9a6fb6145b7f4b695a46aabff7b7e\"" Apr 16 02:21:50.301116 kubelet[2905]: E0416 02:21:50.296172 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:21:50.874893 containerd[1572]: time="2026-04-16T02:21:50.864521656Z" level=info msg="CreateContainer within sandbox \"394b95486e1dd6939c3261152e91a35c63f9a6fb6145b7f4b695a46aabff7b7e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 02:21:51.371871 containerd[1572]: time="2026-04-16T02:21:51.370436782Z" level=info msg="Container fa24e93c2ad72eb13aca0ce2520b5ea34ef46dd3bacd96c346590f865883d2c6: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:21:51.426243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount690164070.mount: Deactivated successfully. Apr 16 02:21:51.501792 containerd[1572]: time="2026-04-16T02:21:51.497050513Z" level=error msg="get state for 55fef7e28303f956860797a3b84004b8e080f0125bab9a211c2804f9ec8b2010" error="context deadline exceeded" Apr 16 02:21:51.501792 containerd[1572]: time="2026-04-16T02:21:51.497213835Z" level=warning msg="unknown status" status=0 Apr 16 02:21:51.751292 containerd[1572]: time="2026-04-16T02:21:51.748501210Z" level=info msg="CreateContainer within sandbox \"394b95486e1dd6939c3261152e91a35c63f9a6fb6145b7f4b695a46aabff7b7e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fa24e93c2ad72eb13aca0ce2520b5ea34ef46dd3bacd96c346590f865883d2c6\"" Apr 16 02:21:51.911468 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85-rootfs.mount: Deactivated successfully. 
Apr 16 02:21:52.077314 containerd[1572]: time="2026-04-16T02:21:52.064137339Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 16 02:21:52.111444 containerd[1572]: time="2026-04-16T02:21:52.081006635Z" level=info msg="StartContainer for \"fa24e93c2ad72eb13aca0ce2520b5ea34ef46dd3bacd96c346590f865883d2c6\"" Apr 16 02:21:52.211337 containerd[1572]: time="2026-04-16T02:21:52.206990059Z" level=info msg="connecting to shim fa24e93c2ad72eb13aca0ce2520b5ea34ef46dd3bacd96c346590f865883d2c6" address="unix:///run/containerd/s/6fb61340d6e65f0abbeb6757aa0b1e04e0240f7f66623a005e4405364ec7e598" protocol=ttrpc version=3 Apr 16 02:21:53.278369 kubelet[2905]: E0416 02:21:53.271370 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.176s" Apr 16 02:21:53.316416 kubelet[2905]: E0416 02:21:53.315702 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:21:53.603143 kubelet[2905]: I0416 02:21:53.601707 2905 scope.go:117] "RemoveContainer" containerID="bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90" Apr 16 02:21:53.687186 kubelet[2905]: I0416 02:21:53.687091 2905 scope.go:117] "RemoveContainer" containerID="6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85" Apr 16 02:21:53.687186 kubelet[2905]: E0416 02:21:53.687230 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:21:53.712142 kubelet[2905]: E0416 02:21:53.687540 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:21:53.785839 containerd[1572]: time="2026-04-16T02:21:53.784074486Z" level=info msg="RemoveContainer for \"bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90\"" Apr 16 02:21:53.835922 containerd[1572]: time="2026-04-16T02:21:53.835236656Z" level=info msg="RemoveContainer for \"bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90\" returns successfully" Apr 16 02:21:54.017518 containerd[1572]: time="2026-04-16T02:21:54.016839739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wmdr2,Uid:224b1be2-a057-4a43-9d23-a0957387a459,Namespace:kube-system,Attempt:0,} returns sandbox id \"55fef7e28303f956860797a3b84004b8e080f0125bab9a211c2804f9ec8b2010\"" Apr 16 02:21:54.117408 kubelet[2905]: E0416 02:21:54.097501 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:21:54.350990 systemd[1]: Started cri-containerd-fa24e93c2ad72eb13aca0ce2520b5ea34ef46dd3bacd96c346590f865883d2c6.scope - libcontainer container fa24e93c2ad72eb13aca0ce2520b5ea34ef46dd3bacd96c346590f865883d2c6. 
Apr 16 02:21:54.626439 containerd[1572]: time="2026-04-16T02:21:54.625025922Z" level=info msg="CreateContainer within sandbox \"55fef7e28303f956860797a3b84004b8e080f0125bab9a211c2804f9ec8b2010\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 02:21:54.820694 containerd[1572]: time="2026-04-16T02:21:54.819301859Z" level=info msg="Container 2e71e5f7311aceb0f7a739d41b70dad069209dcea2a55124e9375d049f06966e: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:21:55.025090 containerd[1572]: time="2026-04-16T02:21:54.938412019Z" level=info msg="CreateContainer within sandbox \"55fef7e28303f956860797a3b84004b8e080f0125bab9a211c2804f9ec8b2010\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2e71e5f7311aceb0f7a739d41b70dad069209dcea2a55124e9375d049f06966e\"" Apr 16 02:21:55.048950 containerd[1572]: time="2026-04-16T02:21:55.046738623Z" level=info msg="StartContainer for \"2e71e5f7311aceb0f7a739d41b70dad069209dcea2a55124e9375d049f06966e\"" Apr 16 02:21:55.062670 containerd[1572]: time="2026-04-16T02:21:55.059595270Z" level=info msg="connecting to shim 2e71e5f7311aceb0f7a739d41b70dad069209dcea2a55124e9375d049f06966e" address="unix:///run/containerd/s/68c90d0a339f8e3f62c7aa54626b7f73a165f7228db0de6a4e6e74c58d904afd" protocol=ttrpc version=3 Apr 16 02:21:55.674368 kubelet[2905]: I0416 02:21:55.673886 2905 scope.go:117] "RemoveContainer" containerID="6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85" Apr 16 02:21:55.677745 kubelet[2905]: E0416 02:21:55.675348 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:21:55.679736 kubelet[2905]: E0416 02:21:55.678526 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:21:56.780778 sshd[5097]: Connection closed by 10.0.0.1 port 51930 Apr 16 02:21:56.767584 systemd[1]: Started cri-containerd-2e71e5f7311aceb0f7a739d41b70dad069209dcea2a55124e9375d049f06966e.scope - libcontainer container 2e71e5f7311aceb0f7a739d41b70dad069209dcea2a55124e9375d049f06966e. Apr 16 02:21:56.781507 sshd-session[5076]: pam_unix(sshd:session): session closed for user core Apr 16 02:21:56.941125 systemd[1]: sshd@34-10.0.0.34:22-10.0.0.1:51930.service: Deactivated successfully. Apr 16 02:21:56.988715 systemd[1]: session-35.scope: Deactivated successfully. Apr 16 02:21:56.993401 systemd[1]: session-35.scope: Consumed 4.129s CPU time, 18.9M memory peak. Apr 16 02:21:57.025391 systemd-logind[1559]: Session 35 logged out. Waiting for processes to exit. Apr 16 02:21:57.129629 systemd-logind[1559]: Removed session 35. Apr 16 02:21:57.352998 containerd[1572]: time="2026-04-16T02:21:57.351424522Z" level=info msg="StartContainer for \"fa24e93c2ad72eb13aca0ce2520b5ea34ef46dd3bacd96c346590f865883d2c6\" returns successfully" Apr 16 02:21:57.711539 systemd[1]: cri-containerd-e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98.scope: Deactivated successfully. Apr 16 02:21:57.721209 systemd[1]: cri-containerd-e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98.scope: Consumed 8.947s CPU time, 22.6M memory peak, 868K read from disk. 
Apr 16 02:21:57.828052 containerd[1572]: time="2026-04-16T02:21:57.827826117Z" level=info msg="received container exit event container_id:\"e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98\" id:\"e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98\" pid:4691 exit_status:1 exited_at:{seconds:1776306117 nanos:717242660}" Apr 16 02:21:58.118839 kubelet[2905]: E0416 02:21:58.118310 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:21:58.427800 containerd[1572]: time="2026-04-16T02:21:58.422160826Z" level=info msg="StartContainer for \"2e71e5f7311aceb0f7a739d41b70dad069209dcea2a55124e9375d049f06966e\" returns successfully" Apr 16 02:21:58.931416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98-rootfs.mount: Deactivated successfully. Apr 16 02:21:59.271824 kubelet[2905]: I0416 02:21:59.271425 2905 scope.go:117] "RemoveContainer" containerID="5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4" Apr 16 02:21:59.276205 kubelet[2905]: I0416 02:21:59.274426 2905 scope.go:117] "RemoveContainer" containerID="e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98" Apr 16 02:21:59.276702 kubelet[2905]: E0416 02:21:59.276653 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:21:59.277027 kubelet[2905]: E0416 02:21:59.276828 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 16 02:21:59.328701 containerd[1572]: time="2026-04-16T02:21:59.328457353Z" level=info msg="RemoveContainer for \"5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4\"" Apr 16 02:21:59.335958 kubelet[2905]: E0416 02:21:59.332684 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:21:59.348143 kubelet[2905]: E0416 02:21:59.345044 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:21:59.485533 containerd[1572]: time="2026-04-16T02:21:59.484887771Z" level=info msg="RemoveContainer for \"5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4\" returns successfully" Apr 16 02:22:00.704847 kubelet[2905]: I0416 02:22:00.700490 2905 scope.go:117] "RemoveContainer" containerID="e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98" Apr 16 02:22:00.717648 kubelet[2905]: E0416 02:22:00.717584 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:00.787219 kubelet[2905]: E0416 02:22:00.786957 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:00.797257 kubelet[2905]: E0416 
02:22:00.796218 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 16 02:22:00.799356 kubelet[2905]: E0416 02:22:00.799209 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:01.986921 systemd[1]: Started sshd@35-10.0.0.34:22-10.0.0.1:39220.service - OpenSSH per-connection server daemon (10.0.0.1:39220). Apr 16 02:22:02.849676 sshd[5222]: Accepted publickey for core from 10.0.0.1 port 39220 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:22:02.858283 sshd-session[5222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:22:03.058899 systemd-logind[1559]: New session 36 of user core. Apr 16 02:22:03.125004 systemd[1]: Started session-36.scope - Session 36 of User core. Apr 16 02:22:04.510959 kubelet[2905]: I0416 02:22:04.510233 2905 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wmdr2" podStartSLOduration=397.510207283 podStartE2EDuration="6m37.510207283s" podCreationTimestamp="2026-04-16 02:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:22:03.80544096 +0000 UTC m=+489.097329521" watchObservedRunningTime="2026-04-16 02:22:04.510207283 +0000 UTC m=+489.802095844" Apr 16 02:22:07.106427 sshd[5227]: Connection closed by 10.0.0.1 port 39220 Apr 16 02:22:07.114061 sshd-session[5222]: pam_unix(sshd:session): session closed for user core Apr 16 02:22:07.241714 systemd[1]: sshd@35-10.0.0.34:22-10.0.0.1:39220.service: Deactivated successfully. Apr 16 02:22:07.312076 systemd[1]: session-36.scope: Deactivated successfully. Apr 16 02:22:07.316305 systemd[1]: session-36.scope: Consumed 1.832s CPU time, 17.1M memory peak. Apr 16 02:22:07.386961 systemd-logind[1559]: Session 36 logged out. Waiting for processes to exit. Apr 16 02:22:07.410275 systemd-logind[1559]: Removed session 36. 
Apr 16 02:22:07.940163 kubelet[2905]: I0416 02:22:07.939625 2905 scope.go:117] "RemoveContainer" containerID="e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98" Apr 16 02:22:07.987735 kubelet[2905]: E0416 02:22:07.945812 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:08.031170 kubelet[2905]: E0416 02:22:08.026762 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 16 02:22:10.294166 kubelet[2905]: I0416 02:22:10.292972 2905 scope.go:117] "RemoveContainer" containerID="6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85" Apr 16 02:22:10.309331 kubelet[2905]: E0416 02:22:10.296196 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:10.330800 kubelet[2905]: E0416 02:22:10.329239 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:22:10.505782 kubelet[2905]: I0416 02:22:10.505120 2905 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8cn7r" podStartSLOduration=400.505024959 podStartE2EDuration="6m40.505024959s" podCreationTimestamp="2026-04-16 02:15:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:22:09.288161174 +0000 UTC m=+494.580049735" watchObservedRunningTime="2026-04-16 02:22:10.505024959 +0000 UTC m=+495.796913512" Apr 16 02:22:10.690732 kubelet[2905]: E0416 02:22:10.688513 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:10.885213 kubelet[2905]: E0416 02:22:10.884813 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:12.473962 systemd[1]: Started sshd@36-10.0.0.34:22-10.0.0.1:38844.service - OpenSSH per-connection server daemon (10.0.0.1:38844). Apr 16 02:22:13.078876 sshd[5247]: Accepted publickey for core from 10.0.0.1 port 38844 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:22:13.083807 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:22:13.295340 systemd-logind[1559]: New session 37 of user core. Apr 16 02:22:13.469848 systemd[1]: Started session-37.scope - Session 37 of User core. 
Apr 16 02:22:15.421076 containerd[1572]: time="2026-04-16T02:22:15.417142294Z" level=warning msg="container event discarded" container=b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d type=CONTAINER_STOPPED_EVENT Apr 16 02:22:15.488989 containerd[1572]: time="2026-04-16T02:22:15.488637113Z" level=warning msg="container event discarded" container=8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17 type=CONTAINER_CREATED_EVENT Apr 16 02:22:16.689714 sshd[5250]: Connection closed by 10.0.0.1 port 38844 Apr 16 02:22:16.692274 sshd-session[5247]: pam_unix(sshd:session): session closed for user core Apr 16 02:22:16.810591 systemd[1]: sshd@36-10.0.0.34:22-10.0.0.1:38844.service: Deactivated successfully. Apr 16 02:22:16.886237 systemd[1]: session-37.scope: Deactivated successfully. Apr 16 02:22:16.891183 systemd[1]: session-37.scope: Consumed 2.112s CPU time, 18.4M memory peak. Apr 16 02:22:16.925996 systemd-logind[1559]: Session 37 logged out. Waiting for processes to exit. Apr 16 02:22:16.957063 systemd-logind[1559]: Removed session 37. Apr 16 02:22:17.486032 containerd[1572]: time="2026-04-16T02:22:17.483259376Z" level=warning msg="container event discarded" container=8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17 type=CONTAINER_STARTED_EVENT Apr 16 02:22:18.717452 containerd[1572]: time="2026-04-16T02:22:18.716914639Z" level=warning msg="container event discarded" container=8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17 type=CONTAINER_STOPPED_EVENT Apr 16 02:22:19.842591 containerd[1572]: time="2026-04-16T02:22:19.841819753Z" level=warning msg="container event discarded" container=1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563 type=CONTAINER_CREATED_EVENT Apr 16 02:22:20.233170 kubelet[2905]: I0416 02:22:20.225478 2905 scope.go:117] "RemoveContainer" containerID="e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98" Apr 16 02:22:20.243146 kubelet[2905]: E0416 02:22:20.235337 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:20.341415 containerd[1572]: time="2026-04-16T02:22:20.339015415Z" level=info msg="CreateContainer within sandbox \"d0d51dcff03504f1fef58134facfd4cd81ca116611e32da6c564cac072c5a2c6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:3,}" Apr 16 02:22:20.440659 containerd[1572]: time="2026-04-16T02:22:20.438635078Z" level=info msg="Container 096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:22:20.542245 containerd[1572]: time="2026-04-16T02:22:20.531765485Z" level=info msg="CreateContainer within sandbox \"d0d51dcff03504f1fef58134facfd4cd81ca116611e32da6c564cac072c5a2c6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:3,} returns container id \"096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd\"" Apr 16 02:22:20.560031 containerd[1572]: time="2026-04-16T02:22:20.558063378Z" level=info msg="StartContainer for \"096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd\"" Apr 16 02:22:20.606455 containerd[1572]: time="2026-04-16T02:22:20.606199609Z" level=info msg="connecting to shim 096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd" address="unix:///run/containerd/s/5f74707208b0d02950181218f9914fc308cbc5438693fd3705e35aae6ffc62c0" protocol=ttrpc version=3 Apr 16 02:22:20.818244 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3021347734.mount: Deactivated successfully. Apr 16 02:22:21.187722 containerd[1572]: time="2026-04-16T02:22:21.186056057Z" level=warning msg="container event discarded" container=1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563 type=CONTAINER_STARTED_EVENT Apr 16 02:22:22.013584 systemd[1]: Started cri-containerd-096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd.scope - libcontainer container 096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd. Apr 16 02:22:22.078857 systemd[1]: Started sshd@37-10.0.0.34:22-10.0.0.1:41330.service - OpenSSH per-connection server daemon (10.0.0.1:41330). Apr 16 02:22:22.700542 sshd[5282]: Accepted publickey for core from 10.0.0.1 port 41330 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:22:22.718732 sshd-session[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:22:22.932173 systemd-logind[1559]: New session 38 of user core. Apr 16 02:22:22.987447 containerd[1572]: time="2026-04-16T02:22:22.984773644Z" level=warning msg="container event discarded" container=1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563 type=CONTAINER_STOPPED_EVENT Apr 16 02:22:23.036407 systemd[1]: Started session-38.scope - Session 38 of User core. Apr 16 02:22:23.182889 containerd[1572]: time="2026-04-16T02:22:23.180224661Z" level=info msg="StartContainer for \"096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd\" returns successfully" Apr 16 02:22:23.261701 kubelet[2905]: E0416 02:22:23.256226 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:23.522738 containerd[1572]: time="2026-04-16T02:22:23.519685161Z" level=warning msg="container event discarded" container=b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a type=CONTAINER_CREATED_EVENT Apr 16 02:22:23.798707 kubelet[2905]: E0416 02:22:23.769604 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:24.094656 kubelet[2905]: I0416 02:22:24.091200 2905 scope.go:117] "RemoveContainer" containerID="6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85" Apr 16 02:22:24.100610 kubelet[2905]: E0416 02:22:24.096615 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:24.107834 kubelet[2905]: E0416 02:22:24.106348 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:22:24.682919 sshd[5291]: Connection closed by 10.0.0.1 port 41330 Apr 16 02:22:24.686572 sshd-session[5282]: pam_unix(sshd:session): session closed for user core Apr 16 02:22:24.808398 systemd[1]: sshd@37-10.0.0.34:22-10.0.0.1:41330.service: Deactivated successfully. 
Apr 16 02:22:24.828909 containerd[1572]: time="2026-04-16T02:22:24.824784792Z" level=warning msg="container event discarded" container=bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90 type=CONTAINER_CREATED_EVENT Apr 16 02:22:24.889288 systemd[1]: session-38.scope: Deactivated successfully. Apr 16 02:22:24.914939 systemd-logind[1559]: Session 38 logged out. Waiting for processes to exit. Apr 16 02:22:24.926892 systemd-logind[1559]: Removed session 38. Apr 16 02:22:25.007712 kubelet[2905]: E0416 02:22:25.005975 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:26.514961 containerd[1572]: time="2026-04-16T02:22:26.514523374Z" level=warning msg="container event discarded" container=b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a type=CONTAINER_STARTED_EVENT Apr 16 02:22:26.966660 containerd[1572]: time="2026-04-16T02:22:26.962359111Z" level=warning msg="container event discarded" container=bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90 type=CONTAINER_STARTED_EVENT Apr 16 02:22:28.093432 containerd[1572]: time="2026-04-16T02:22:28.092953606Z" level=warning msg="container event discarded" container=b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a type=CONTAINER_STOPPED_EVENT Apr 16 02:22:29.976326 systemd[1]: Started sshd@38-10.0.0.34:22-10.0.0.1:57528.service - OpenSSH per-connection server daemon (10.0.0.1:57528). Apr 16 02:22:30.237798 containerd[1572]: time="2026-04-16T02:22:30.236121381Z" level=warning msg="container event discarded" container=768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7 type=CONTAINER_CREATED_EVENT Apr 16 02:22:30.786869 kubelet[2905]: E0416 02:22:30.785427 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:30.990208 sshd[5316]: Accepted publickey for core from 10.0.0.1 port 57528 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:22:31.025110 sshd-session[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:22:31.306075 systemd-logind[1559]: New session 39 of user core. Apr 16 02:22:31.388442 systemd[1]: Started session-39.scope - Session 39 of User core. 
Apr 16 02:22:31.651258 containerd[1572]: time="2026-04-16T02:22:31.649470090Z" level=warning msg="container event discarded" container=768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7 type=CONTAINER_STARTED_EVENT Apr 16 02:22:32.730861 containerd[1572]: time="2026-04-16T02:22:32.729145101Z" level=warning msg="container event discarded" container=768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7 type=CONTAINER_STOPPED_EVENT Apr 16 02:22:33.014591 kubelet[2905]: E0416 02:22:33.011310 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:33.511125 containerd[1572]: time="2026-04-16T02:22:33.510796280Z" level=warning msg="container event discarded" container=3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2 type=CONTAINER_CREATED_EVENT Apr 16 02:22:34.841073 containerd[1572]: time="2026-04-16T02:22:34.839348018Z" level=warning msg="container event discarded" container=3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2 type=CONTAINER_STARTED_EVENT Apr 16 02:22:36.597979 sshd[5319]: Connection closed by 10.0.0.1 port 57528 Apr 16 02:22:36.596283 sshd-session[5316]: pam_unix(sshd:session): session closed for user core Apr 16 02:22:36.747113 systemd[1]: sshd@38-10.0.0.34:22-10.0.0.1:57528.service: Deactivated successfully. Apr 16 02:22:36.964689 systemd[1]: session-39.scope: Deactivated successfully. Apr 16 02:22:36.968217 systemd[1]: session-39.scope: Consumed 2.966s CPU time, 18.7M memory peak. Apr 16 02:22:36.986910 systemd-logind[1559]: Session 39 logged out. Waiting for processes to exit. Apr 16 02:22:37.029036 systemd-logind[1559]: Removed session 39. Apr 16 02:22:37.176078 kubelet[2905]: I0416 02:22:37.175731 2905 scope.go:117] "RemoveContainer" containerID="6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85" Apr 16 02:22:37.176078 kubelet[2905]: E0416 02:22:37.176121 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:37.396517 containerd[1572]: time="2026-04-16T02:22:37.396224520Z" level=info msg="CreateContainer within sandbox \"8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:5,}" Apr 16 02:22:37.573958 containerd[1572]: time="2026-04-16T02:22:37.573219436Z" level=info msg="Container e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:22:37.829879 containerd[1572]: time="2026-04-16T02:22:37.829021198Z" level=info msg="CreateContainer within sandbox \"8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:5,} returns container id \"e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77\"" Apr 16 02:22:37.962013 containerd[1572]: time="2026-04-16T02:22:37.960884637Z" level=info msg="StartContainer for \"e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77\"" Apr 16 02:22:37.974336 containerd[1572]: time="2026-04-16T02:22:37.968538094Z" level=info msg="connecting to shim e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77" address="unix:///run/containerd/s/19fb7b3958679c24ac66e8dd57527f0cf6dd433ec0ccb7dc7514e788b8b7a005" protocol=ttrpc version=3 Apr 16 02:22:38.892470 systemd[1]: Started 
cri-containerd-e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77.scope - libcontainer container e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77. Apr 16 02:22:39.984932 containerd[1572]: time="2026-04-16T02:22:39.984495513Z" level=info msg="StartContainer for \"e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77\" returns successfully" Apr 16 02:22:41.330489 kubelet[2905]: E0416 02:22:41.330073 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:41.819660 systemd[1]: Started sshd@39-10.0.0.34:22-10.0.0.1:49176.service - OpenSSH per-connection server daemon (10.0.0.1:49176). Apr 16 02:22:42.709837 sshd[5367]: Accepted publickey for core from 10.0.0.1 port 49176 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:22:42.741801 sshd-session[5367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:22:42.966613 systemd-logind[1559]: New session 40 of user core. Apr 16 02:22:43.079822 systemd[1]: Started session-40.scope - Session 40 of User core. Apr 16 02:22:43.332430 kubelet[2905]: E0416 02:22:43.330151 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:43.909811 kubelet[2905]: E0416 02:22:43.909434 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:43.949798 kubelet[2905]: E0416 02:22:43.948898 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:44.265924 kubelet[2905]: E0416 02:22:44.265003 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:49.218998 sshd[5370]: Connection closed by 10.0.0.1 port 49176 Apr 16 02:22:49.223474 sshd-session[5367]: pam_unix(sshd:session): session closed for user core Apr 16 02:22:49.263474 systemd-logind[1559]: Session 40 logged out. Waiting for processes to exit. Apr 16 02:22:49.295358 systemd[1]: sshd@39-10.0.0.34:22-10.0.0.1:49176.service: Deactivated successfully. Apr 16 02:22:49.358354 systemd[1]: session-40.scope: Deactivated successfully. Apr 16 02:22:49.360284 systemd[1]: session-40.scope: Consumed 4.353s CPU time, 16.6M memory peak. Apr 16 02:22:49.386575 systemd-logind[1559]: Removed session 40. Apr 16 02:22:53.808126 kubelet[2905]: E0416 02:22:53.805671 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:22:54.523748 systemd[1]: Started sshd@40-10.0.0.34:22-10.0.0.1:43976.service - OpenSSH per-connection server daemon (10.0.0.1:43976). Apr 16 02:22:55.005646 sshd[5384]: Accepted publickey for core from 10.0.0.1 port 43976 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:22:55.020678 sshd-session[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:22:55.283270 systemd-logind[1559]: New session 41 of user core. 
Apr 16 02:22:55.330892 systemd[1]: Started session-41.scope - Session 41 of User core. Apr 16 02:22:59.918405 sshd[5387]: Connection closed by 10.0.0.1 port 43976 Apr 16 02:22:59.925198 sshd-session[5384]: pam_unix(sshd:session): session closed for user core Apr 16 02:23:00.042983 systemd[1]: sshd@40-10.0.0.34:22-10.0.0.1:43976.service: Deactivated successfully. Apr 16 02:23:00.176715 systemd[1]: session-41.scope: Deactivated successfully. Apr 16 02:23:00.182060 systemd[1]: session-41.scope: Consumed 3.345s CPU time, 16.2M memory peak. Apr 16 02:23:00.201053 systemd-logind[1559]: Session 41 logged out. Waiting for processes to exit. Apr 16 02:23:00.220510 systemd-logind[1559]: Removed session 41. Apr 16 02:23:05.339158 systemd[1]: Started sshd@41-10.0.0.34:22-10.0.0.1:43904.service - OpenSSH per-connection server daemon (10.0.0.1:43904). Apr 16 02:23:06.490864 sshd[5403]: Accepted publickey for core from 10.0.0.1 port 43904 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:23:06.513483 sshd-session[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:23:06.820982 systemd-logind[1559]: New session 42 of user core. Apr 16 02:23:07.028854 systemd[1]: Started session-42.scope - Session 42 of User core. Apr 16 02:23:11.306454 kubelet[2905]: E0416 02:23:11.306125 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.206s" Apr 16 02:23:13.928479 sshd[5407]: Connection closed by 10.0.0.1 port 43904 Apr 16 02:23:13.943705 sshd-session[5403]: pam_unix(sshd:session): session closed for user core Apr 16 02:23:14.046760 systemd[1]: sshd@41-10.0.0.34:22-10.0.0.1:43904.service: Deactivated successfully. Apr 16 02:23:14.217341 systemd[1]: session-42.scope: Deactivated successfully. Apr 16 02:23:14.227515 systemd[1]: session-42.scope: Consumed 4.712s CPU time, 17.2M memory peak. Apr 16 02:23:14.288992 systemd-logind[1559]: Session 42 logged out. Waiting for processes to exit. Apr 16 02:23:14.359219 systemd-logind[1559]: Removed session 42. Apr 16 02:23:18.294124 kubelet[2905]: E0416 02:23:18.293255 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:23:19.475883 systemd[1]: Started sshd@42-10.0.0.34:22-10.0.0.1:60806.service - OpenSSH per-connection server daemon (10.0.0.1:60806). Apr 16 02:23:20.093116 sshd[5424]: Accepted publickey for core from 10.0.0.1 port 60806 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:23:20.104018 sshd-session[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:23:20.384191 systemd-logind[1559]: New session 43 of user core. Apr 16 02:23:20.428485 systemd[1]: Started session-43.scope - Session 43 of User core. 
Apr 16 02:23:21.199106 kubelet[2905]: E0416 02:23:21.189067 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.094s" Apr 16 02:23:25.006041 kubelet[2905]: E0416 02:23:25.000317 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:23:30.287313 sshd[5427]: Connection closed by 10.0.0.1 port 60806 Apr 16 02:23:30.286174 sshd-session[5424]: pam_unix(sshd:session): session closed for user core Apr 16 02:23:30.359523 systemd[1]: sshd@42-10.0.0.34:22-10.0.0.1:60806.service: Deactivated successfully. Apr 16 02:23:30.455248 systemd[1]: session-43.scope: Deactivated successfully. Apr 16 02:23:30.455826 systemd[1]: session-43.scope: Consumed 6.659s CPU time, 17.8M memory peak. Apr 16 02:23:30.484301 systemd-logind[1559]: Session 43 logged out. Waiting for processes to exit. Apr 16 02:23:30.536501 systemd-logind[1559]: Removed session 43. Apr 16 02:23:30.907314 systemd[1]: cri-containerd-e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77.scope: Deactivated successfully. Apr 16 02:23:30.913397 containerd[1572]: time="2026-04-16T02:23:30.913196717Z" level=info msg="received container exit event container_id:\"e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77\" id:\"e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77\" pid:5345 exit_status:1 exited_at:{seconds:1776306210 nanos:907389393}" Apr 16 02:23:30.922710 systemd[1]: cri-containerd-e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77.scope: Consumed 7.071s CPU time, 32.8M memory peak, 2.6M read from disk. Apr 16 02:23:32.457027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77-rootfs.mount: Deactivated successfully. Apr 16 02:23:32.730386 kubelet[2905]: I0416 02:23:32.728255 2905 scope.go:117] "RemoveContainer" containerID="6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85" Apr 16 02:23:32.801092 kubelet[2905]: I0416 02:23:32.800475 2905 scope.go:117] "RemoveContainer" containerID="e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77" Apr 16 02:23:32.857936 kubelet[2905]: E0416 02:23:32.857044 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:23:32.878427 kubelet[2905]: E0416 02:23:32.877769 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:23:32.964944 containerd[1572]: time="2026-04-16T02:23:32.963424719Z" level=info msg="RemoveContainer for \"6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85\"" Apr 16 02:23:33.029666 containerd[1572]: time="2026-04-16T02:23:33.029092704Z" level=info msg="RemoveContainer for \"6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85\" returns successfully" Apr 16 02:23:33.135856 systemd[1]: cri-containerd-096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd.scope: Deactivated successfully. 
Apr 16 02:23:33.141818 systemd[1]: cri-containerd-096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd.scope: Consumed 4.599s CPU time, 23.2M memory peak, 1008K read from disk. Apr 16 02:23:33.167665 containerd[1572]: time="2026-04-16T02:23:33.167381289Z" level=info msg="received container exit event container_id:\"096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd\" id:\"096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd\" pid:5275 exit_status:1 exited_at:{seconds:1776306213 nanos:161045073}" Apr 16 02:23:34.089872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd-rootfs.mount: Deactivated successfully. Apr 16 02:23:34.627456 kubelet[2905]: I0416 02:23:34.627095 2905 scope.go:117] "RemoveContainer" containerID="e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77" Apr 16 02:23:34.644140 kubelet[2905]: E0416 02:23:34.638432 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:23:34.677477 kubelet[2905]: E0416 02:23:34.677127 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:23:35.550018 kubelet[2905]: E0416 02:23:35.547492 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:23:35.642094 systemd[1]: Started sshd@43-10.0.0.34:22-10.0.0.1:37378.service - OpenSSH per-connection server daemon (10.0.0.1:37378). Apr 16 02:23:37.200998 sshd[5467]: Accepted publickey for core from 10.0.0.1 port 37378 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:23:37.240276 sshd-session[5467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:23:37.633387 systemd-logind[1559]: New session 44 of user core. Apr 16 02:23:37.772900 systemd[1]: Started session-44.scope - Session 44 of User core. 
Apr 16 02:23:38.442585 kubelet[2905]: E0416 02:23:38.429300 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.289s" Apr 16 02:23:38.600156 kubelet[2905]: I0416 02:23:38.599436 2905 scope.go:117] "RemoveContainer" containerID="e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98" Apr 16 02:23:38.732698 kubelet[2905]: E0416 02:23:38.730508 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:23:38.739005 kubelet[2905]: I0416 02:23:38.738678 2905 scope.go:117] "RemoveContainer" containerID="096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd" Apr 16 02:23:38.739005 kubelet[2905]: E0416 02:23:38.738970 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:23:38.739614 kubelet[2905]: E0416 02:23:38.739327 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 16 02:23:39.009070 containerd[1572]: time="2026-04-16T02:23:39.005836398Z" level=info msg="RemoveContainer for \"e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98\"" Apr 16 02:23:39.110930 containerd[1572]: time="2026-04-16T02:23:39.110054860Z" level=info msg="RemoveContainer for \"e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98\" returns successfully" Apr 16 02:23:45.190675 sshd[5470]: Connection closed by 10.0.0.1 port 37378 Apr 16 02:23:45.194897 sshd-session[5467]: pam_unix(sshd:session): session closed for user core Apr 16 02:23:45.281793 systemd[1]: sshd@43-10.0.0.34:22-10.0.0.1:37378.service: Deactivated successfully. Apr 16 02:23:45.355188 systemd[1]: session-44.scope: Deactivated successfully. Apr 16 02:23:45.356478 systemd[1]: session-44.scope: Consumed 5.260s CPU time, 16.5M memory peak. Apr 16 02:23:45.369179 systemd-logind[1559]: Session 44 logged out. Waiting for processes to exit. Apr 16 02:23:45.412205 systemd-logind[1559]: Removed session 44. 
Apr 16 02:23:47.944091 kubelet[2905]: I0416 02:23:47.939964 2905 scope.go:117] "RemoveContainer" containerID="096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd" Apr 16 02:23:47.944091 kubelet[2905]: E0416 02:23:47.944190 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:23:47.958346 kubelet[2905]: E0416 02:23:47.944542 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 16 02:23:48.150821 kubelet[2905]: I0416 02:23:48.148511 2905 scope.go:117] "RemoveContainer" containerID="e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77" Apr 16 02:23:48.162875 kubelet[2905]: E0416 02:23:48.160041 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:23:48.168248 kubelet[2905]: E0416 02:23:48.167833 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:23:49.290114 kubelet[2905]: E0416 02:23:49.289392 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:23:50.621439 systemd[1]: Started sshd@44-10.0.0.34:22-10.0.0.1:33234.service - OpenSSH per-connection server daemon (10.0.0.1:33234). Apr 16 02:23:52.016802 sshd[5485]: Accepted publickey for core from 10.0.0.1 port 33234 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:23:52.124481 sshd-session[5485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:23:52.355060 systemd-logind[1559]: New session 45 of user core. Apr 16 02:23:52.480953 systemd[1]: Started session-45.scope - Session 45 of User core. Apr 16 02:23:57.029119 sshd[5488]: Connection closed by 10.0.0.1 port 33234 Apr 16 02:23:57.040594 sshd-session[5485]: pam_unix(sshd:session): session closed for user core Apr 16 02:23:57.191866 systemd[1]: sshd@44-10.0.0.34:22-10.0.0.1:33234.service: Deactivated successfully. Apr 16 02:23:57.387915 systemd[1]: session-45.scope: Deactivated successfully. Apr 16 02:23:57.406626 systemd[1]: session-45.scope: Consumed 3.176s CPU time, 16.5M memory peak. Apr 16 02:23:57.453824 systemd-logind[1559]: Session 45 logged out. Waiting for processes to exit. Apr 16 02:23:57.465901 systemd-logind[1559]: Removed session 45. 
Apr 16 02:23:59.132122 kubelet[2905]: E0416 02:23:59.131104 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:24:00.261449 kubelet[2905]: I0416 02:24:00.260465 2905 scope.go:117] "RemoveContainer" containerID="e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77" Apr 16 02:24:00.287855 kubelet[2905]: E0416 02:24:00.261771 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:24:00.287855 kubelet[2905]: E0416 02:24:00.262306 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:24:01.427703 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Apr 16 02:24:02.197264 systemd-tmpfiles[5503]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 16 02:24:02.204914 systemd-tmpfiles[5503]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 16 02:24:02.207468 systemd-tmpfiles[5503]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 16 02:24:02.241869 systemd-tmpfiles[5503]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 16 02:24:02.260703 systemd-tmpfiles[5503]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 16 02:24:02.265341 systemd-tmpfiles[5503]: ACLs are not supported, ignoring. Apr 16 02:24:02.265377 systemd-tmpfiles[5503]: ACLs are not supported, ignoring. Apr 16 02:24:02.287044 systemd[1]: Started sshd@45-10.0.0.34:22-10.0.0.1:39896.service - OpenSSH per-connection server daemon (10.0.0.1:39896). Apr 16 02:24:02.337773 systemd-tmpfiles[5503]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 02:24:02.338674 systemd-tmpfiles[5503]: Skipping /boot Apr 16 02:24:02.447515 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Apr 16 02:24:02.457776 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Apr 16 02:24:02.513662 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. 
Apr 16 02:24:02.704919 kubelet[2905]: I0416 02:24:02.703073 2905 scope.go:117] "RemoveContainer" containerID="096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd" Apr 16 02:24:02.712185 kubelet[2905]: E0416 02:24:02.710517 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:24:02.780079 kubelet[2905]: E0416 02:24:02.778747 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 16 02:24:03.573974 sshd[5506]: Accepted publickey for core from 10.0.0.1 port 39896 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:24:03.615518 sshd-session[5506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:24:04.088164 systemd-logind[1559]: New session 46 of user core. Apr 16 02:24:04.218085 systemd[1]: Started session-46.scope - Session 46 of User core. Apr 16 02:24:10.124380 kubelet[2905]: E0416 02:24:10.123878 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:24:11.189869 kubelet[2905]: I0416 02:24:11.169246 2905 scope.go:117] "RemoveContainer" containerID="e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77" Apr 16 02:24:11.242359 kubelet[2905]: E0416 02:24:11.241064 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:24:11.283809 kubelet[2905]: E0416 02:24:11.282965 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:24:15.817449 sshd[5510]: Connection closed by 10.0.0.1 port 39896 Apr 16 02:24:15.908887 sshd-session[5506]: pam_unix(sshd:session): session closed for user core Apr 16 02:24:15.998930 systemd[1]: sshd@45-10.0.0.34:22-10.0.0.1:39896.service: Deactivated successfully. Apr 16 02:24:16.122917 systemd[1]: session-46.scope: Deactivated successfully. Apr 16 02:24:16.126124 systemd[1]: session-46.scope: Consumed 8.741s CPU time, 17.7M memory peak. Apr 16 02:24:16.182290 systemd-logind[1559]: Session 46 logged out. Waiting for processes to exit. Apr 16 02:24:16.202180 systemd-logind[1559]: Removed session 46. 
Apr 16 02:24:18.101060 kubelet[2905]: I0416 02:24:18.099770 2905 scope.go:117] "RemoveContainer" containerID="096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd" Apr 16 02:24:18.115831 kubelet[2905]: E0416 02:24:18.108812 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:24:18.247355 containerd[1572]: time="2026-04-16T02:24:18.244485742Z" level=info msg="CreateContainer within sandbox \"d0d51dcff03504f1fef58134facfd4cd81ca116611e32da6c564cac072c5a2c6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:4,}" Apr 16 02:24:18.455860 containerd[1572]: time="2026-04-16T02:24:18.425506556Z" level=info msg="Container 76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:24:18.551843 containerd[1572]: time="2026-04-16T02:24:18.551642596Z" level=info msg="CreateContainer within sandbox \"d0d51dcff03504f1fef58134facfd4cd81ca116611e32da6c564cac072c5a2c6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:4,} returns container id \"76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0\"" Apr 16 02:24:18.577893 containerd[1572]: time="2026-04-16T02:24:18.573986246Z" level=info msg="StartContainer for \"76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0\"" Apr 16 02:24:18.636729 containerd[1572]: time="2026-04-16T02:24:18.636464338Z" level=info msg="connecting to shim 76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0" address="unix:///run/containerd/s/5f74707208b0d02950181218f9914fc308cbc5438693fd3705e35aae6ffc62c0" protocol=ttrpc version=3 Apr 16 02:24:20.539281 systemd[1]: Started cri-containerd-76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0.scope - libcontainer container 76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0. Apr 16 02:24:21.239084 systemd[1]: Started sshd@46-10.0.0.34:22-10.0.0.1:54830.service - OpenSSH per-connection server daemon (10.0.0.1:54830). Apr 16 02:24:22.779258 containerd[1572]: time="2026-04-16T02:24:22.778165971Z" level=error msg="get state for 76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0" error="context deadline exceeded" Apr 16 02:24:22.787901 containerd[1572]: time="2026-04-16T02:24:22.786078459Z" level=warning msg="unknown status" status=0 Apr 16 02:24:23.224626 sshd[5546]: Accepted publickey for core from 10.0.0.1 port 54830 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:24:23.281970 sshd-session[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:24:23.578692 systemd-logind[1559]: New session 47 of user core. Apr 16 02:24:23.781219 systemd[1]: Started session-47.scope - Session 47 of User core. 
Apr 16 02:24:24.501760 kubelet[2905]: I0416 02:24:24.496402 2905 scope.go:117] "RemoveContainer" containerID="e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77" Apr 16 02:24:24.501760 kubelet[2905]: E0416 02:24:24.496968 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:24:24.501760 kubelet[2905]: E0416 02:24:24.497230 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:24:24.928997 containerd[1572]: time="2026-04-16T02:24:24.921827948Z" level=error msg="get state for 76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0" error="context deadline exceeded" Apr 16 02:24:24.928997 containerd[1572]: time="2026-04-16T02:24:24.921961663Z" level=warning msg="unknown status" status=0 Apr 16 02:24:25.206092 containerd[1572]: time="2026-04-16T02:24:25.196523942Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 16 02:24:25.206092 containerd[1572]: time="2026-04-16T02:24:25.200377833Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 16 02:24:25.859802 containerd[1572]: time="2026-04-16T02:24:25.859248944Z" level=info msg="StartContainer for \"76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0\" returns successfully" Apr 16 02:24:27.249803 kubelet[2905]: E0416 02:24:27.249086 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:24:28.354881 kubelet[2905]: E0416 02:24:28.354197 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:24:29.207737 sshd[5551]: Connection closed by 10.0.0.1 port 54830 Apr 16 02:24:29.210914 sshd-session[5546]: pam_unix(sshd:session): session closed for user core Apr 16 02:24:29.342887 systemd[1]: sshd@46-10.0.0.34:22-10.0.0.1:54830.service: Deactivated successfully. Apr 16 02:24:29.429259 systemd[1]: session-47.scope: Deactivated successfully. Apr 16 02:24:29.489293 systemd[1]: session-47.scope: Consumed 3.527s CPU time, 17.6M memory peak. Apr 16 02:24:29.518886 systemd-logind[1559]: Session 47 logged out. Waiting for processes to exit. Apr 16 02:24:29.548971 systemd-logind[1559]: Removed session 47. Apr 16 02:24:33.036466 kubelet[2905]: E0416 02:24:33.035312 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:24:33.120057 kubelet[2905]: E0416 02:24:33.119508 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:24:34.477260 systemd[1]: Started sshd@47-10.0.0.34:22-10.0.0.1:42782.service - OpenSSH per-connection server daemon (10.0.0.1:42782). 
Apr 16 02:24:35.708369 sshd[5582]: Accepted publickey for core from 10.0.0.1 port 42782 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:24:35.739718 sshd-session[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:24:36.091068 systemd-logind[1559]: New session 48 of user core. Apr 16 02:24:36.146377 systemd[1]: Started session-48.scope - Session 48 of User core. Apr 16 02:24:37.108964 kubelet[2905]: I0416 02:24:37.108477 2905 scope.go:117] "RemoveContainer" containerID="e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77" Apr 16 02:24:37.130854 kubelet[2905]: E0416 02:24:37.129233 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:24:37.134764 kubelet[2905]: E0416 02:24:37.134261 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:24:38.203175 kubelet[2905]: E0416 02:24:38.202775 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:24:41.234359 kubelet[2905]: E0416 02:24:41.233078 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.151s" Apr 16 02:24:43.293520 kubelet[2905]: E0416 02:24:43.284069 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:24:43.879970 kubelet[2905]: E0416 02:24:43.875900 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:24:44.681091 sshd[5585]: Connection closed by 10.0.0.1 port 42782 Apr 16 02:24:44.709298 sshd-session[5582]: pam_unix(sshd:session): session closed for user core Apr 16 02:24:44.888932 systemd[1]: sshd@47-10.0.0.34:22-10.0.0.1:42782.service: Deactivated successfully. Apr 16 02:24:45.021530 systemd[1]: session-48.scope: Deactivated successfully. Apr 16 02:24:45.030888 systemd[1]: session-48.scope: Consumed 5.437s CPU time, 15.7M memory peak. Apr 16 02:24:45.106054 systemd-logind[1559]: Session 48 logged out. Waiting for processes to exit. Apr 16 02:24:45.212128 systemd-logind[1559]: Removed session 48. 
Apr 16 02:24:50.081445 kubelet[2905]: I0416 02:24:50.081135 2905 scope.go:117] "RemoveContainer" containerID="e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77" Apr 16 02:24:50.089640 kubelet[2905]: E0416 02:24:50.088207 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:24:50.098755 kubelet[2905]: E0416 02:24:50.093504 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:24:50.096280 systemd[1]: Started sshd@48-10.0.0.34:22-10.0.0.1:51004.service - OpenSSH per-connection server daemon (10.0.0.1:51004). Apr 16 02:24:51.618117 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 51004 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:24:51.660527 sshd-session[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:24:51.928483 systemd-logind[1559]: New session 49 of user core. Apr 16 02:24:52.033104 systemd[1]: Started session-49.scope - Session 49 of User core. Apr 16 02:24:52.204650 kubelet[2905]: E0416 02:24:52.195210 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:24:54.206017 kubelet[2905]: E0416 02:24:54.205698 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:25:01.072231 containerd[1572]: time="2026-04-16T02:25:01.067531773Z" level=warning msg="container event discarded" container=bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90 type=CONTAINER_STOPPED_EVENT Apr 16 02:25:01.344118 sshd[5607]: Connection closed by 10.0.0.1 port 51004 Apr 16 02:25:01.396357 sshd-session[5602]: pam_unix(sshd:session): session closed for user core Apr 16 02:25:01.532434 systemd[1]: sshd@48-10.0.0.34:22-10.0.0.1:51004.service: Deactivated successfully. Apr 16 02:25:01.642166 systemd[1]: session-49.scope: Deactivated successfully. Apr 16 02:25:01.649002 systemd[1]: session-49.scope: Consumed 5.507s CPU time, 15.7M memory peak. Apr 16 02:25:01.686347 systemd-logind[1559]: Session 49 logged out. Waiting for processes to exit. Apr 16 02:25:01.725050 systemd-logind[1559]: Removed session 49. 
Apr 16 02:25:02.028203 containerd[1572]: time="2026-04-16T02:25:02.026485536Z" level=warning msg="container event discarded" container=b9e0161d512147e664ceb50a64df88e50dfa5d739a000701d5076c8d70fae01d type=CONTAINER_DELETED_EVENT Apr 16 02:25:04.229820 containerd[1572]: time="2026-04-16T02:25:04.227185101Z" level=warning msg="container event discarded" container=5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4 type=CONTAINER_STOPPED_EVENT Apr 16 02:25:05.109515 kubelet[2905]: I0416 02:25:05.107201 2905 scope.go:117] "RemoveContainer" containerID="e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77" Apr 16 02:25:05.161592 kubelet[2905]: E0416 02:25:05.160840 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:25:05.238654 containerd[1572]: time="2026-04-16T02:25:05.237308816Z" level=info msg="CreateContainer within sandbox \"8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:6,}" Apr 16 02:25:05.341777 containerd[1572]: time="2026-04-16T02:25:05.338902618Z" level=info msg="Container 279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:25:05.577706 containerd[1572]: time="2026-04-16T02:25:05.577436700Z" level=info msg="CreateContainer within sandbox \"8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:6,} returns container id \"279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe\"" Apr 16 02:25:05.607190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1175604741.mount: Deactivated successfully. Apr 16 02:25:05.698740 containerd[1572]: time="2026-04-16T02:25:05.697471309Z" level=info msg="StartContainer for \"279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe\"" Apr 16 02:25:05.709124 containerd[1572]: time="2026-04-16T02:25:05.708905237Z" level=info msg="connecting to shim 279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe" address="unix:///run/containerd/s/19fb7b3958679c24ac66e8dd57527f0cf6dd433ec0ccb7dc7514e788b8b7a005" protocol=ttrpc version=3 Apr 16 02:25:06.709017 systemd[1]: Started sshd@49-10.0.0.34:22-10.0.0.1:48992.service - OpenSSH per-connection server daemon (10.0.0.1:48992). Apr 16 02:25:06.889391 systemd[1]: Started cri-containerd-279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe.scope - libcontainer container 279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe. Apr 16 02:25:07.098952 containerd[1572]: time="2026-04-16T02:25:07.096123827Z" level=warning msg="container event discarded" container=a500918002e736f490697b989b5dc53af78e80ad98d850123d7414e4d6a2f590 type=CONTAINER_DELETED_EVENT Apr 16 02:25:08.435681 sshd[5635]: Accepted publickey for core from 10.0.0.1 port 48992 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:25:08.511467 sshd-session[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:25:08.887442 systemd-logind[1559]: New session 50 of user core. Apr 16 02:25:09.015965 systemd[1]: Started session-50.scope - Session 50 of User core. 
Apr 16 02:25:09.803449 containerd[1572]: time="2026-04-16T02:25:09.803019586Z" level=info msg="StartContainer for \"279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe\" returns successfully" Apr 16 02:25:11.355026 kubelet[2905]: E0416 02:25:11.353591 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:25:13.225187 kubelet[2905]: E0416 02:25:13.223344 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:25:13.699206 kubelet[2905]: E0416 02:25:13.698973 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:25:16.219224 sshd[5648]: Connection closed by 10.0.0.1 port 48992 Apr 16 02:25:16.235133 sshd-session[5635]: pam_unix(sshd:session): session closed for user core Apr 16 02:25:16.317775 systemd[1]: sshd@49-10.0.0.34:22-10.0.0.1:48992.service: Deactivated successfully. Apr 16 02:25:16.439068 systemd[1]: session-50.scope: Deactivated successfully. Apr 16 02:25:16.440820 systemd[1]: session-50.scope: Consumed 4.483s CPU time, 15.8M memory peak. Apr 16 02:25:16.464057 systemd-logind[1559]: Session 50 logged out. Waiting for processes to exit. Apr 16 02:25:16.489109 systemd-logind[1559]: Removed session 50. Apr 16 02:25:17.515514 systemd[1]: cri-containerd-76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0.scope: Deactivated successfully. Apr 16 02:25:17.518807 systemd[1]: cri-containerd-76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0.scope: Consumed 7.699s CPU time, 21.7M memory peak, 544K read from disk. Apr 16 02:25:17.534796 containerd[1572]: time="2026-04-16T02:25:17.533305261Z" level=info msg="received container exit event container_id:\"76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0\" id:\"76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0\" pid:5539 exit_status:1 exited_at:{seconds:1776306317 nanos:526930295}" Apr 16 02:25:18.985197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0-rootfs.mount: Deactivated successfully. 
Apr 16 02:25:19.814834 kubelet[2905]: I0416 02:25:19.814244 2905 scope.go:117] "RemoveContainer" containerID="096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd" Apr 16 02:25:19.890122 kubelet[2905]: I0416 02:25:19.889803 2905 scope.go:117] "RemoveContainer" containerID="76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0" Apr 16 02:25:19.890122 kubelet[2905]: E0416 02:25:19.890163 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:25:19.915111 kubelet[2905]: E0416 02:25:19.890411 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 16 02:25:19.936046 containerd[1572]: time="2026-04-16T02:25:19.935491947Z" level=info msg="RemoveContainer for \"096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd\"" Apr 16 02:25:20.006962 containerd[1572]: time="2026-04-16T02:25:20.006469860Z" level=info msg="RemoveContainer for \"096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd\" returns successfully" Apr 16 02:25:20.681050 containerd[1572]: time="2026-04-16T02:25:20.680154616Z" level=warning msg="container event discarded" container=e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98 type=CONTAINER_CREATED_EVENT Apr 16 02:25:20.996038 kubelet[2905]: I0416 02:25:20.995158 2905 scope.go:117] "RemoveContainer" containerID="76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0" Apr 16 02:25:21.038017 kubelet[2905]: E0416 02:25:21.035511 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:25:21.047863 kubelet[2905]: E0416 02:25:21.047644 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 16 02:25:21.689134 systemd[1]: Started sshd@50-10.0.0.34:22-10.0.0.1:59630.service - OpenSSH per-connection server daemon (10.0.0.1:59630). Apr 16 02:25:22.104122 kubelet[2905]: E0416 02:25:22.104008 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:25:23.044102 sshd[5690]: Accepted publickey for core from 10.0.0.1 port 59630 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:25:23.081032 sshd-session[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:25:23.394350 systemd-logind[1559]: New session 51 of user core. Apr 16 02:25:23.541328 systemd[1]: Started session-51.scope - Session 51 of User core. 
Apr 16 02:25:23.931988 kubelet[2905]: E0416 02:25:23.930989 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:25:27.123428 containerd[1572]: time="2026-04-16T02:25:27.122966501Z" level=warning msg="container event discarded" container=e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98 type=CONTAINER_STARTED_EVENT Apr 16 02:25:28.003504 kubelet[2905]: I0416 02:25:28.001362 2905 scope.go:117] "RemoveContainer" containerID="76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0" Apr 16 02:25:28.018143 kubelet[2905]: E0416 02:25:28.008150 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:25:28.026133 kubelet[2905]: E0416 02:25:28.021689 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 16 02:25:30.276231 sshd[5694]: Connection closed by 10.0.0.1 port 59630 Apr 16 02:25:30.285094 sshd-session[5690]: pam_unix(sshd:session): session closed for user core Apr 16 02:25:30.388395 systemd-logind[1559]: Session 51 logged out. Waiting for processes to exit. Apr 16 02:25:30.426433 systemd[1]: sshd@50-10.0.0.34:22-10.0.0.1:59630.service: Deactivated successfully. Apr 16 02:25:30.571356 systemd[1]: session-51.scope: Deactivated successfully. Apr 16 02:25:30.579122 systemd[1]: session-51.scope: Consumed 4.835s CPU time, 16.2M memory peak. Apr 16 02:25:30.707008 systemd-logind[1559]: Removed session 51. Apr 16 02:25:31.437618 containerd[1572]: time="2026-04-16T02:25:31.433431869Z" level=warning msg="container event discarded" container=6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85 type=CONTAINER_CREATED_EVENT Apr 16 02:25:33.656090 containerd[1572]: time="2026-04-16T02:25:33.654866803Z" level=warning msg="container event discarded" container=6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85 type=CONTAINER_STARTED_EVENT Apr 16 02:25:35.821992 systemd[1]: Started sshd@51-10.0.0.34:22-10.0.0.1:46566.service - OpenSSH per-connection server daemon (10.0.0.1:46566). Apr 16 02:25:37.663933 sshd[5709]: Accepted publickey for core from 10.0.0.1 port 46566 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:25:37.900377 sshd-session[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:25:38.314047 systemd-logind[1559]: New session 52 of user core. Apr 16 02:25:38.390434 systemd[1]: Started session-52.scope - Session 52 of User core. 
Apr 16 02:25:43.184893 kubelet[2905]: I0416 02:25:43.178513 2905 scope.go:117] "RemoveContainer" containerID="76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0" Apr 16 02:25:43.198967 kubelet[2905]: E0416 02:25:43.197939 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:25:43.220920 kubelet[2905]: E0416 02:25:43.215421 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 16 02:25:44.084848 sshd[5713]: Connection closed by 10.0.0.1 port 46566 Apr 16 02:25:44.107052 sshd-session[5709]: pam_unix(sshd:session): session closed for user core Apr 16 02:25:44.269367 systemd[1]: sshd@51-10.0.0.34:22-10.0.0.1:46566.service: Deactivated successfully. Apr 16 02:25:44.398390 systemd[1]: session-52.scope: Deactivated successfully. Apr 16 02:25:44.404467 systemd[1]: session-52.scope: Consumed 3.954s CPU time, 17.9M memory peak. Apr 16 02:25:44.449702 systemd-logind[1559]: Session 52 logged out. Waiting for processes to exit. Apr 16 02:25:44.571304 systemd-logind[1559]: Removed session 52. Apr 16 02:25:47.198940 kubelet[2905]: E0416 02:25:47.197528 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:25:49.619954 systemd[1]: Started sshd@52-10.0.0.34:22-10.0.0.1:37000.service - OpenSSH per-connection server daemon (10.0.0.1:37000). Apr 16 02:25:51.342620 sshd[5728]: Accepted publickey for core from 10.0.0.1 port 37000 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:25:51.359344 sshd-session[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:25:51.416850 kubelet[2905]: E0416 02:25:51.414915 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.247s" Apr 16 02:25:51.601911 systemd-logind[1559]: New session 53 of user core. Apr 16 02:25:51.872480 systemd[1]: Started session-53.scope - Session 53 of User core. Apr 16 02:25:55.136753 kubelet[2905]: E0416 02:25:55.105167 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:25:56.082186 kubelet[2905]: E0416 02:25:56.081838 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:25:56.960164 sshd[5731]: Connection closed by 10.0.0.1 port 37000 Apr 16 02:25:56.967325 sshd-session[5728]: pam_unix(sshd:session): session closed for user core Apr 16 02:25:57.042314 systemd[1]: sshd@52-10.0.0.34:22-10.0.0.1:37000.service: Deactivated successfully. Apr 16 02:25:57.168229 kubelet[2905]: I0416 02:25:57.166919 2905 scope.go:117] "RemoveContainer" containerID="76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0" Apr 16 02:25:57.174096 systemd[1]: session-53.scope: Deactivated successfully. Apr 16 02:25:57.176331 systemd[1]: session-53.scope: Consumed 3.988s CPU time, 17.7M memory peak. 
Apr 16 02:25:57.189895 kubelet[2905]: E0416 02:25:57.172156 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:25:57.189895 kubelet[2905]: E0416 02:25:57.189829 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 16 02:25:57.199043 systemd-logind[1559]: Session 53 logged out. Waiting for processes to exit. Apr 16 02:25:57.310090 systemd-logind[1559]: Removed session 53. Apr 16 02:26:02.272792 systemd[1]: Started sshd@53-10.0.0.34:22-10.0.0.1:39370.service - OpenSSH per-connection server daemon (10.0.0.1:39370). Apr 16 02:26:03.129459 sshd[5746]: Accepted publickey for core from 10.0.0.1 port 39370 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:26:03.161287 sshd-session[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:26:03.522799 systemd-logind[1559]: New session 54 of user core. Apr 16 02:26:03.684910 systemd[1]: Started session-54.scope - Session 54 of User core. Apr 16 02:26:08.005862 sshd[5749]: Connection closed by 10.0.0.1 port 39370 Apr 16 02:26:08.023216 sshd-session[5746]: pam_unix(sshd:session): session closed for user core Apr 16 02:26:08.112465 kubelet[2905]: I0416 02:26:08.111931 2905 scope.go:117] "RemoveContainer" containerID="76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0" Apr 16 02:26:08.126324 kubelet[2905]: E0416 02:26:08.120112 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:26:08.135444 kubelet[2905]: E0416 02:26:08.135046 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 16 02:26:08.143422 systemd[1]: sshd@53-10.0.0.34:22-10.0.0.1:39370.service: Deactivated successfully. Apr 16 02:26:08.216512 systemd[1]: session-54.scope: Deactivated successfully. Apr 16 02:26:08.221106 systemd[1]: session-54.scope: Consumed 2.618s CPU time, 17.7M memory peak. Apr 16 02:26:08.280828 systemd-logind[1559]: Session 54 logged out. Waiting for processes to exit. Apr 16 02:26:08.303895 systemd-logind[1559]: Removed session 54. Apr 16 02:26:13.245003 systemd[1]: Started sshd@54-10.0.0.34:22-10.0.0.1:41820.service - OpenSSH per-connection server daemon (10.0.0.1:41820). Apr 16 02:26:13.663818 sshd[5770]: Accepted publickey for core from 10.0.0.1 port 41820 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:26:13.647841 sshd-session[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:26:13.906334 systemd-logind[1559]: New session 55 of user core. Apr 16 02:26:14.007093 systemd[1]: Started session-55.scope - Session 55 of User core. 
Apr 16 02:26:18.928907 sshd[5773]: Connection closed by 10.0.0.1 port 41820 Apr 16 02:26:18.964148 sshd-session[5770]: pam_unix(sshd:session): session closed for user core Apr 16 02:26:19.234427 systemd[1]: sshd@54-10.0.0.34:22-10.0.0.1:41820.service: Deactivated successfully. Apr 16 02:26:19.332959 systemd[1]: session-55.scope: Deactivated successfully. Apr 16 02:26:19.337887 systemd[1]: session-55.scope: Consumed 3.532s CPU time, 16.1M memory peak. Apr 16 02:26:19.414805 systemd-logind[1559]: Session 55 logged out. Waiting for processes to exit. Apr 16 02:26:19.465086 systemd-logind[1559]: Removed session 55. Apr 16 02:26:19.592932 systemd[1]: Started sshd@55-10.0.0.34:22-10.0.0.1:59922.service - OpenSSH per-connection server daemon (10.0.0.1:59922). Apr 16 02:26:20.306829 sshd[5786]: Accepted publickey for core from 10.0.0.1 port 59922 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:26:20.343253 sshd-session[5786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:26:20.645677 systemd-logind[1559]: New session 56 of user core. Apr 16 02:26:20.827045 systemd[1]: Started session-56.scope - Session 56 of User core. Apr 16 02:26:21.103517 kubelet[2905]: I0416 02:26:21.100275 2905 scope.go:117] "RemoveContainer" containerID="76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0" Apr 16 02:26:21.119300 kubelet[2905]: E0416 02:26:21.106672 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:26:21.119300 kubelet[2905]: E0416 02:26:21.115104 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 16 02:26:23.104705 kubelet[2905]: E0416 02:26:23.104040 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:26:23.112878 sshd[5789]: Connection closed by 10.0.0.1 port 59922 Apr 16 02:26:23.121538 sshd-session[5786]: pam_unix(sshd:session): session closed for user core Apr 16 02:26:23.608168 systemd[1]: sshd@55-10.0.0.34:22-10.0.0.1:59922.service: Deactivated successfully. Apr 16 02:26:23.677090 systemd[1]: session-56.scope: Deactivated successfully. Apr 16 02:26:23.681269 systemd[1]: session-56.scope: Consumed 1.469s CPU time, 47.6M memory peak. Apr 16 02:26:23.746832 systemd-logind[1559]: Session 56 logged out. Waiting for processes to exit. Apr 16 02:26:23.860213 systemd[1]: Started sshd@56-10.0.0.34:22-10.0.0.1:59936.service - OpenSSH per-connection server daemon (10.0.0.1:59936). Apr 16 02:26:23.905514 systemd-logind[1559]: Removed session 56. Apr 16 02:26:24.900787 sshd[5801]: Accepted publickey for core from 10.0.0.1 port 59936 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:26:25.012716 sshd-session[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:26:25.208989 systemd-logind[1559]: New session 57 of user core. Apr 16 02:26:25.350226 systemd[1]: Started session-57.scope - Session 57 of User core. 
Apr 16 02:26:32.140999 kubelet[2905]: E0416 02:26:32.139849 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:26:33.134499 kubelet[2905]: I0416 02:26:33.133898 2905 scope.go:117] "RemoveContainer" containerID="76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0" Apr 16 02:26:33.140035 kubelet[2905]: E0416 02:26:33.138249 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:26:33.169784 kubelet[2905]: E0416 02:26:33.168157 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 16 02:26:42.232058 kubelet[2905]: E0416 02:26:42.230126 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:26:46.097349 kubelet[2905]: E0416 02:26:46.096834 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:26:46.108917 kubelet[2905]: I0416 02:26:46.098917 2905 scope.go:117] "RemoveContainer" containerID="76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0" Apr 16 02:26:46.113739 kubelet[2905]: E0416 02:26:46.109746 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:26:46.239171 containerd[1572]: time="2026-04-16T02:26:46.238624768Z" level=info msg="CreateContainer within sandbox \"d0d51dcff03504f1fef58134facfd4cd81ca116611e32da6c564cac072c5a2c6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:5,}" Apr 16 02:26:46.421112 containerd[1572]: time="2026-04-16T02:26:46.418443721Z" level=info msg="Container 44e5b13a002fb7cf7395dc5ed693d924dd47a219d0cad02d18d48754864f98e3: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:26:46.571395 containerd[1572]: time="2026-04-16T02:26:46.570832354Z" level=info msg="CreateContainer within sandbox \"d0d51dcff03504f1fef58134facfd4cd81ca116611e32da6c564cac072c5a2c6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:5,} returns container id \"44e5b13a002fb7cf7395dc5ed693d924dd47a219d0cad02d18d48754864f98e3\"" Apr 16 02:26:46.620266 containerd[1572]: time="2026-04-16T02:26:46.620057512Z" level=info msg="StartContainer for \"44e5b13a002fb7cf7395dc5ed693d924dd47a219d0cad02d18d48754864f98e3\"" Apr 16 02:26:46.723856 containerd[1572]: time="2026-04-16T02:26:46.712241626Z" level=info msg="connecting to shim 44e5b13a002fb7cf7395dc5ed693d924dd47a219d0cad02d18d48754864f98e3" address="unix:///run/containerd/s/5f74707208b0d02950181218f9914fc308cbc5438693fd3705e35aae6ffc62c0" protocol=ttrpc version=3 Apr 16 02:26:47.543405 systemd[1]: Started cri-containerd-44e5b13a002fb7cf7395dc5ed693d924dd47a219d0cad02d18d48754864f98e3.scope - libcontainer container 44e5b13a002fb7cf7395dc5ed693d924dd47a219d0cad02d18d48754864f98e3. 
Apr 16 02:26:48.520938 containerd[1572]: time="2026-04-16T02:26:48.515425913Z" level=info msg="StartContainer for \"44e5b13a002fb7cf7395dc5ed693d924dd47a219d0cad02d18d48754864f98e3\" returns successfully" Apr 16 02:26:49.690756 kubelet[2905]: E0416 02:26:49.689121 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:26:50.061388 containerd[1572]: time="2026-04-16T02:26:50.058065850Z" level=warning msg="container event discarded" container=394b95486e1dd6939c3261152e91a35c63f9a6fb6145b7f4b695a46aabff7b7e type=CONTAINER_CREATED_EVENT Apr 16 02:26:50.061388 containerd[1572]: time="2026-04-16T02:26:50.061179506Z" level=warning msg="container event discarded" container=394b95486e1dd6939c3261152e91a35c63f9a6fb6145b7f4b695a46aabff7b7e type=CONTAINER_STARTED_EVENT Apr 16 02:26:50.755737 kubelet[2905]: E0416 02:26:50.755341 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:26:51.716379 containerd[1572]: time="2026-04-16T02:26:51.715517100Z" level=warning msg="container event discarded" container=fa24e93c2ad72eb13aca0ce2520b5ea34ef46dd3bacd96c346590f865883d2c6 type=CONTAINER_CREATED_EVENT Apr 16 02:26:52.243146 containerd[1572]: time="2026-04-16T02:26:52.240541719Z" level=warning msg="container event discarded" container=6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85 type=CONTAINER_STOPPED_EVENT Apr 16 02:26:52.965693 kubelet[2905]: E0416 02:26:52.965212 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:26:53.517814 sshd[5804]: Connection closed by 10.0.0.1 port 59936 Apr 16 02:26:53.520997 sshd-session[5801]: pam_unix(sshd:session): session closed for user core Apr 16 02:26:53.594324 systemd[1]: sshd@56-10.0.0.34:22-10.0.0.1:59936.service: Deactivated successfully. Apr 16 02:26:53.713787 systemd[1]: session-57.scope: Deactivated successfully. Apr 16 02:26:53.715443 systemd[1]: session-57.scope: Consumed 7.992s CPU time, 47.5M memory peak. Apr 16 02:26:53.738918 systemd-logind[1559]: Session 57 logged out. Waiting for processes to exit. Apr 16 02:26:53.793614 systemd[1]: Started sshd@57-10.0.0.34:22-10.0.0.1:44164.service - OpenSSH per-connection server daemon (10.0.0.1:44164). Apr 16 02:26:53.797066 systemd-logind[1559]: Removed session 57. 
Apr 16 02:26:53.848372 containerd[1572]: time="2026-04-16T02:26:53.848070041Z" level=warning msg="container event discarded" container=bd002fe003f5d335b435fee674815708c1b8bae785f3d3d72bd927fe5d1bec90 type=CONTAINER_DELETED_EVENT Apr 16 02:26:54.026874 containerd[1572]: time="2026-04-16T02:26:54.026822636Z" level=warning msg="container event discarded" container=55fef7e28303f956860797a3b84004b8e080f0125bab9a211c2804f9ec8b2010 type=CONTAINER_CREATED_EVENT Apr 16 02:26:54.035072 containerd[1572]: time="2026-04-16T02:26:54.034047733Z" level=warning msg="container event discarded" container=55fef7e28303f956860797a3b84004b8e080f0125bab9a211c2804f9ec8b2010 type=CONTAINER_STARTED_EVENT Apr 16 02:26:54.143447 kubelet[2905]: E0416 02:26:54.132740 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:26:54.494053 sshd[5860]: Accepted publickey for core from 10.0.0.1 port 44164 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:26:54.507197 sshd-session[5860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:26:54.587135 systemd-logind[1559]: New session 58 of user core. Apr 16 02:26:54.657412 systemd[1]: Started session-58.scope - Session 58 of User core. Apr 16 02:26:54.928301 containerd[1572]: time="2026-04-16T02:26:54.925003119Z" level=warning msg="container event discarded" container=2e71e5f7311aceb0f7a739d41b70dad069209dcea2a55124e9375d049f06966e type=CONTAINER_CREATED_EVENT Apr 16 02:26:57.208098 containerd[1572]: time="2026-04-16T02:26:57.207599935Z" level=warning msg="container event discarded" container=fa24e93c2ad72eb13aca0ce2520b5ea34ef46dd3bacd96c346590f865883d2c6 type=CONTAINER_STARTED_EVENT Apr 16 02:26:58.405624 containerd[1572]: time="2026-04-16T02:26:58.404738864Z" level=warning msg="container event discarded" container=2e71e5f7311aceb0f7a739d41b70dad069209dcea2a55124e9375d049f06966e type=CONTAINER_STARTED_EVENT Apr 16 02:26:59.027168 containerd[1572]: time="2026-04-16T02:26:59.026671489Z" level=warning msg="container event discarded" container=e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98 type=CONTAINER_STOPPED_EVENT Apr 16 02:26:59.501769 containerd[1572]: time="2026-04-16T02:26:59.499501115Z" level=warning msg="container event discarded" container=5391962145a35b91fc9d2cfff401ee36897c28398a87135487a4f7266b6d9ef4 type=CONTAINER_DELETED_EVENT Apr 16 02:27:01.701081 sshd[5864]: Connection closed by 10.0.0.1 port 44164 Apr 16 02:27:01.711768 sshd-session[5860]: pam_unix(sshd:session): session closed for user core Apr 16 02:27:02.223178 systemd[1]: sshd@57-10.0.0.34:22-10.0.0.1:44164.service: Deactivated successfully. Apr 16 02:27:02.343447 systemd[1]: session-58.scope: Deactivated successfully. Apr 16 02:27:02.370121 systemd[1]: session-58.scope: Consumed 3.382s CPU time, 29.2M memory peak. Apr 16 02:27:02.425968 systemd-logind[1559]: Session 58 logged out. Waiting for processes to exit. Apr 16 02:27:02.598694 systemd[1]: Started sshd@58-10.0.0.34:22-10.0.0.1:41002.service - OpenSSH per-connection server daemon (10.0.0.1:41002). Apr 16 02:27:02.664792 systemd-logind[1559]: Removed session 58. 
Apr 16 02:27:03.438867 sshd[5879]: Accepted publickey for core from 10.0.0.1 port 41002 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:27:03.466064 sshd-session[5879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:27:03.485503 kubelet[2905]: E0416 02:27:03.485406 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:27:03.902474 systemd-logind[1559]: New session 59 of user core. Apr 16 02:27:04.116609 systemd[1]: Started session-59.scope - Session 59 of User core. Apr 16 02:27:05.434004 kubelet[2905]: E0416 02:27:05.381196 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:27:05.726782 kubelet[2905]: E0416 02:27:05.722686 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.639s" Apr 16 02:27:08.824345 sshd[5882]: Connection closed by 10.0.0.1 port 41002 Apr 16 02:27:08.829339 sshd-session[5879]: pam_unix(sshd:session): session closed for user core Apr 16 02:27:08.985372 systemd[1]: sshd@58-10.0.0.34:22-10.0.0.1:41002.service: Deactivated successfully. Apr 16 02:27:09.052274 systemd[1]: session-59.scope: Deactivated successfully. Apr 16 02:27:09.053680 systemd[1]: session-59.scope: Consumed 2.467s CPU time, 18.1M memory peak. Apr 16 02:27:09.096951 systemd-logind[1559]: Session 59 logged out. Waiting for processes to exit. Apr 16 02:27:09.122306 systemd-logind[1559]: Removed session 59. Apr 16 02:27:14.402769 systemd[1]: Started sshd@59-10.0.0.34:22-10.0.0.1:49106.service - OpenSSH per-connection server daemon (10.0.0.1:49106). Apr 16 02:27:15.375347 sshd[5899]: Accepted publickey for core from 10.0.0.1 port 49106 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:27:15.489354 sshd-session[5899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:27:15.905058 systemd-logind[1559]: New session 60 of user core. Apr 16 02:27:16.121866 systemd[1]: Started session-60.scope - Session 60 of User core. Apr 16 02:27:17.016446 kubelet[2905]: E0416 02:27:17.001405 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:27:20.539443 containerd[1572]: time="2026-04-16T02:27:20.532715613Z" level=warning msg="container event discarded" container=096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd type=CONTAINER_CREATED_EVENT Apr 16 02:27:21.136938 sshd[5902]: Connection closed by 10.0.0.1 port 49106 Apr 16 02:27:21.164137 sshd-session[5899]: pam_unix(sshd:session): session closed for user core Apr 16 02:27:21.259109 systemd[1]: sshd@59-10.0.0.34:22-10.0.0.1:49106.service: Deactivated successfully. Apr 16 02:27:21.429276 systemd[1]: session-60.scope: Deactivated successfully. Apr 16 02:27:21.429897 systemd[1]: session-60.scope: Consumed 3.297s CPU time, 18M memory peak. Apr 16 02:27:21.500174 systemd-logind[1559]: Session 60 logged out. Waiting for processes to exit. Apr 16 02:27:21.598718 systemd-logind[1559]: Removed session 60. 
Apr 16 02:27:22.170171 kubelet[2905]: E0416 02:27:22.166194 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:27:23.154438 containerd[1572]: time="2026-04-16T02:27:23.153901926Z" level=warning msg="container event discarded" container=096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd type=CONTAINER_STARTED_EVENT Apr 16 02:27:26.167039 kubelet[2905]: E0416 02:27:26.153391 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:27:26.630413 systemd[1]: Started sshd@60-10.0.0.34:22-10.0.0.1:45364.service - OpenSSH per-connection server daemon (10.0.0.1:45364). Apr 16 02:27:27.724377 sshd[5916]: Accepted publickey for core from 10.0.0.1 port 45364 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:27:27.830879 sshd-session[5916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:27:28.064002 systemd-logind[1559]: New session 61 of user core. Apr 16 02:27:28.118503 systemd[1]: Started session-61.scope - Session 61 of User core. Apr 16 02:27:31.641974 sshd[5919]: Connection closed by 10.0.0.1 port 45364 Apr 16 02:27:31.659663 sshd-session[5916]: pam_unix(sshd:session): session closed for user core Apr 16 02:27:31.703439 systemd[1]: sshd@60-10.0.0.34:22-10.0.0.1:45364.service: Deactivated successfully. Apr 16 02:27:31.803098 systemd[1]: session-61.scope: Deactivated successfully. Apr 16 02:27:31.806369 systemd[1]: session-61.scope: Consumed 2.280s CPU time, 15.9M memory peak. Apr 16 02:27:31.839873 systemd-logind[1559]: Session 61 logged out. Waiting for processes to exit. Apr 16 02:27:31.901095 systemd-logind[1559]: Removed session 61. Apr 16 02:27:37.029033 systemd[1]: Started sshd@61-10.0.0.34:22-10.0.0.1:54700.service - OpenSSH per-connection server daemon (10.0.0.1:54700). Apr 16 02:27:37.427920 kubelet[2905]: E0416 02:27:37.422835 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:27:37.701475 sshd[5933]: Accepted publickey for core from 10.0.0.1 port 54700 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:27:37.702001 sshd-session[5933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:27:37.813896 containerd[1572]: time="2026-04-16T02:27:37.813503060Z" level=warning msg="container event discarded" container=e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77 type=CONTAINER_CREATED_EVENT Apr 16 02:27:37.866717 systemd-logind[1559]: New session 62 of user core. Apr 16 02:27:37.888457 systemd[1]: Started session-62.scope - Session 62 of User core. Apr 16 02:27:39.994693 containerd[1572]: time="2026-04-16T02:27:39.993801075Z" level=warning msg="container event discarded" container=e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77 type=CONTAINER_STARTED_EVENT Apr 16 02:27:42.577982 sshd[5936]: Connection closed by 10.0.0.1 port 54700 Apr 16 02:27:42.585532 sshd-session[5933]: pam_unix(sshd:session): session closed for user core Apr 16 02:27:42.720189 systemd[1]: sshd@61-10.0.0.34:22-10.0.0.1:54700.service: Deactivated successfully. Apr 16 02:27:42.822168 systemd[1]: session-62.scope: Deactivated successfully. 
Apr 16 02:27:42.861454 systemd[1]: session-62.scope: Consumed 2.443s CPU time, 17.7M memory peak. Apr 16 02:27:42.901058 systemd-logind[1559]: Session 62 logged out. Waiting for processes to exit. Apr 16 02:27:43.030264 systemd-logind[1559]: Removed session 62. Apr 16 02:27:48.149243 systemd[1]: Started sshd@62-10.0.0.34:22-10.0.0.1:53614.service - OpenSSH per-connection server daemon (10.0.0.1:53614). Apr 16 02:27:48.886793 sshd[5952]: Accepted publickey for core from 10.0.0.1 port 53614 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:27:48.894400 sshd-session[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:27:48.936181 systemd-logind[1559]: New session 63 of user core. Apr 16 02:27:48.989109 systemd[1]: Started session-63.scope - Session 63 of User core. Apr 16 02:27:52.716524 sshd[5955]: Connection closed by 10.0.0.1 port 53614 Apr 16 02:27:52.739234 sshd-session[5952]: pam_unix(sshd:session): session closed for user core Apr 16 02:27:52.846462 systemd-logind[1559]: Session 63 logged out. Waiting for processes to exit. Apr 16 02:27:52.886607 systemd[1]: sshd@62-10.0.0.34:22-10.0.0.1:53614.service: Deactivated successfully. Apr 16 02:27:53.022034 systemd[1]: session-63.scope: Deactivated successfully. Apr 16 02:27:53.024161 systemd[1]: session-63.scope: Consumed 2.223s CPU time, 18M memory peak. Apr 16 02:27:53.106175 systemd-logind[1559]: Removed session 63. Apr 16 02:27:58.197189 systemd[1]: Started sshd@63-10.0.0.34:22-10.0.0.1:44898.service - OpenSSH per-connection server daemon (10.0.0.1:44898). Apr 16 02:27:59.018335 sshd[5969]: Accepted publickey for core from 10.0.0.1 port 44898 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:27:59.090518 sshd-session[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:27:59.130089 kubelet[2905]: E0416 02:27:59.125753 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:27:59.234228 systemd-logind[1559]: New session 64 of user core. Apr 16 02:27:59.423973 systemd[1]: Started session-64.scope - Session 64 of User core. Apr 16 02:28:03.212401 sshd[5974]: Connection closed by 10.0.0.1 port 44898 Apr 16 02:28:03.216696 sshd-session[5969]: pam_unix(sshd:session): session closed for user core Apr 16 02:28:03.312937 systemd[1]: sshd@63-10.0.0.34:22-10.0.0.1:44898.service: Deactivated successfully. Apr 16 02:28:03.498535 systemd[1]: session-64.scope: Deactivated successfully. Apr 16 02:28:03.500094 systemd[1]: session-64.scope: Consumed 2.080s CPU time, 15.7M memory peak. Apr 16 02:28:03.524535 systemd-logind[1559]: Session 64 logged out. Waiting for processes to exit. Apr 16 02:28:03.611808 systemd-logind[1559]: Removed session 64. Apr 16 02:28:05.114855 kubelet[2905]: E0416 02:28:05.114094 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:28:08.625007 systemd[1]: Started sshd@64-10.0.0.34:22-10.0.0.1:39432.service - OpenSSH per-connection server daemon (10.0.0.1:39432). 
Apr 16 02:28:09.224667 sshd[5988]: Accepted publickey for core from 10.0.0.1 port 39432 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:28:09.244743 sshd-session[5988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:28:09.489529 systemd-logind[1559]: New session 65 of user core. Apr 16 02:28:09.523268 systemd[1]: Started session-65.scope - Session 65 of User core. Apr 16 02:28:12.244987 kubelet[2905]: E0416 02:28:12.243709 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:28:12.673085 sshd[5992]: Connection closed by 10.0.0.1 port 39432 Apr 16 02:28:12.682448 sshd-session[5988]: pam_unix(sshd:session): session closed for user core Apr 16 02:28:12.779994 systemd[1]: sshd@64-10.0.0.34:22-10.0.0.1:39432.service: Deactivated successfully. Apr 16 02:28:12.868069 systemd[1]: session-65.scope: Deactivated successfully. Apr 16 02:28:12.872525 systemd[1]: session-65.scope: Consumed 2.015s CPU time, 15.9M memory peak. Apr 16 02:28:12.897985 systemd-logind[1559]: Session 65 logged out. Waiting for processes to exit. Apr 16 02:28:12.936573 systemd-logind[1559]: Removed session 65. Apr 16 02:28:17.726373 systemd[1]: Started sshd@65-10.0.0.34:22-10.0.0.1:43536.service - OpenSSH per-connection server daemon (10.0.0.1:43536). Apr 16 02:28:17.992505 sshd[6007]: Accepted publickey for core from 10.0.0.1 port 43536 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:28:17.998708 sshd-session[6007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:28:18.124953 systemd-logind[1559]: New session 66 of user core. Apr 16 02:28:18.163991 systemd[1]: Started session-66.scope - Session 66 of User core. Apr 16 02:28:19.985772 sshd[6010]: Connection closed by 10.0.0.1 port 43536 Apr 16 02:28:19.985209 sshd-session[6007]: pam_unix(sshd:session): session closed for user core Apr 16 02:28:20.040827 systemd[1]: sshd@65-10.0.0.34:22-10.0.0.1:43536.service: Deactivated successfully. Apr 16 02:28:20.071993 systemd[1]: session-66.scope: Deactivated successfully. Apr 16 02:28:20.099468 systemd-logind[1559]: Session 66 logged out. Waiting for processes to exit. Apr 16 02:28:20.143078 systemd-logind[1559]: Removed session 66. Apr 16 02:28:23.103458 kubelet[2905]: E0416 02:28:23.097957 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:28:25.418450 systemd[1]: Started sshd@66-10.0.0.34:22-10.0.0.1:43542.service - OpenSSH per-connection server daemon (10.0.0.1:43542). Apr 16 02:28:26.503132 sshd[6024]: Accepted publickey for core from 10.0.0.1 port 43542 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:28:26.617900 sshd-session[6024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:28:26.895790 systemd-logind[1559]: New session 67 of user core. Apr 16 02:28:27.067518 systemd[1]: Started session-67.scope - Session 67 of User core. Apr 16 02:28:31.237280 sshd[6027]: Connection closed by 10.0.0.1 port 43542 Apr 16 02:28:31.241141 sshd-session[6024]: pam_unix(sshd:session): session closed for user core Apr 16 02:28:31.423545 systemd[1]: sshd@66-10.0.0.34:22-10.0.0.1:43542.service: Deactivated successfully. Apr 16 02:28:31.729168 systemd[1]: session-67.scope: Deactivated successfully. 
Apr 16 02:28:31.732981 systemd[1]: session-67.scope: Consumed 2.701s CPU time, 17.8M memory peak. Apr 16 02:28:31.748389 systemd-logind[1559]: Session 67 logged out. Waiting for processes to exit. Apr 16 02:28:31.761673 systemd-logind[1559]: Removed session 67. Apr 16 02:28:32.383884 containerd[1572]: time="2026-04-16T02:28:32.383244028Z" level=warning msg="container event discarded" container=e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77 type=CONTAINER_STOPPED_EVENT Apr 16 02:28:33.040245 containerd[1572]: time="2026-04-16T02:28:33.039405677Z" level=warning msg="container event discarded" container=6cd8ea8b1828ddf365a7e145e953576ff5fe25c533c690464cbb0d909bdead85 type=CONTAINER_DELETED_EVENT Apr 16 02:28:34.048180 containerd[1572]: time="2026-04-16T02:28:34.046240652Z" level=warning msg="container event discarded" container=096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd type=CONTAINER_STOPPED_EVENT Apr 16 02:28:34.096939 kubelet[2905]: E0416 02:28:34.095996 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:28:36.678010 systemd[1]: Started sshd@67-10.0.0.34:22-10.0.0.1:54810.service - OpenSSH per-connection server daemon (10.0.0.1:54810). Apr 16 02:28:37.839201 sshd[6040]: Accepted publickey for core from 10.0.0.1 port 54810 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:28:37.873181 sshd-session[6040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:28:38.114097 systemd-logind[1559]: New session 68 of user core. Apr 16 02:28:38.228421 systemd[1]: Started session-68.scope - Session 68 of User core. Apr 16 02:28:39.121107 containerd[1572]: time="2026-04-16T02:28:39.120788298Z" level=warning msg="container event discarded" container=e8e039019f754495d0b72b8e1637bf7d505614ba9281ab00f30abbab08a61b98 type=CONTAINER_DELETED_EVENT Apr 16 02:28:43.201923 kubelet[2905]: E0416 02:28:43.201206 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:28:43.532139 sshd[6043]: Connection closed by 10.0.0.1 port 54810 Apr 16 02:28:43.538924 sshd-session[6040]: pam_unix(sshd:session): session closed for user core Apr 16 02:28:43.679792 systemd[1]: sshd@67-10.0.0.34:22-10.0.0.1:54810.service: Deactivated successfully. Apr 16 02:28:43.820840 systemd[1]: session-68.scope: Deactivated successfully. Apr 16 02:28:43.825964 systemd[1]: session-68.scope: Consumed 3.177s CPU time, 15.7M memory peak. Apr 16 02:28:43.945223 systemd-logind[1559]: Session 68 logged out. Waiting for processes to exit. Apr 16 02:28:43.985950 systemd-logind[1559]: Removed session 68. Apr 16 02:28:46.121788 kubelet[2905]: E0416 02:28:46.120703 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:28:46.199532 kubelet[2905]: E0416 02:28:46.123131 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:28:49.057312 systemd[1]: Started sshd@68-10.0.0.34:22-10.0.0.1:44646.service - OpenSSH per-connection server daemon (10.0.0.1:44646). 
Apr 16 02:28:50.343026 sshd[6058]: Accepted publickey for core from 10.0.0.1 port 44646 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:28:50.401970 sshd-session[6058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:28:50.622039 systemd-logind[1559]: New session 69 of user core. Apr 16 02:28:50.763315 systemd[1]: Started session-69.scope - Session 69 of User core. Apr 16 02:28:56.014128 sshd[6061]: Connection closed by 10.0.0.1 port 44646 Apr 16 02:28:56.018942 sshd-session[6058]: pam_unix(sshd:session): session closed for user core Apr 16 02:28:56.126592 systemd[1]: sshd@68-10.0.0.34:22-10.0.0.1:44646.service: Deactivated successfully. Apr 16 02:28:56.309842 systemd[1]: session-69.scope: Deactivated successfully. Apr 16 02:28:56.315229 systemd[1]: session-69.scope: Consumed 3.228s CPU time, 15.6M memory peak. Apr 16 02:28:56.464464 systemd-logind[1559]: Session 69 logged out. Waiting for processes to exit. Apr 16 02:28:56.493339 systemd-logind[1559]: Removed session 69. Apr 16 02:29:01.487777 systemd[1]: Started sshd@69-10.0.0.34:22-10.0.0.1:40420.service - OpenSSH per-connection server daemon (10.0.0.1:40420). Apr 16 02:29:02.813815 sshd[6077]: Accepted publickey for core from 10.0.0.1 port 40420 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:29:02.835495 sshd-session[6077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:29:03.108158 systemd-logind[1559]: New session 70 of user core. Apr 16 02:29:03.139473 systemd[1]: Started session-70.scope - Session 70 of User core. Apr 16 02:29:10.168842 kubelet[2905]: E0416 02:29:10.167239 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:29:10.396445 sshd[6080]: Connection closed by 10.0.0.1 port 40420 Apr 16 02:29:10.397844 sshd-session[6077]: pam_unix(sshd:session): session closed for user core Apr 16 02:29:10.581235 systemd[1]: sshd@69-10.0.0.34:22-10.0.0.1:40420.service: Deactivated successfully. Apr 16 02:29:10.800505 systemd[1]: session-70.scope: Deactivated successfully. Apr 16 02:29:10.807841 systemd[1]: session-70.scope: Consumed 2.854s CPU time, 17.7M memory peak. Apr 16 02:29:10.927383 systemd-logind[1559]: Session 70 logged out. Waiting for processes to exit. Apr 16 02:29:11.004175 systemd-logind[1559]: Removed session 70. Apr 16 02:29:11.046028 systemd[1]: cri-containerd-279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe.scope: Deactivated successfully. Apr 16 02:29:11.090811 systemd[1]: cri-containerd-279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe.scope: Consumed 42.918s CPU time, 55M memory peak, 3.9M read from disk. Apr 16 02:29:11.131846 containerd[1572]: time="2026-04-16T02:29:11.131039911Z" level=info msg="received container exit event container_id:\"279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe\" id:\"279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe\" pid:5638 exit_status:1 exited_at:{seconds:1776306551 nanos:37202570}" Apr 16 02:29:11.201035 kubelet[2905]: E0416 02:29:11.185678 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.103s" Apr 16 02:29:12.629532 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe-rootfs.mount: Deactivated successfully. 
Apr 16 02:29:14.696049 kubelet[2905]: I0416 02:29:14.695676 2905 scope.go:117] "RemoveContainer" containerID="e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77" Apr 16 02:29:14.696049 kubelet[2905]: I0416 02:29:14.696184 2905 scope.go:117] "RemoveContainer" containerID="279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe" Apr 16 02:29:14.721687 kubelet[2905]: E0416 02:29:14.704177 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:29:14.721687 kubelet[2905]: E0416 02:29:14.715422 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:29:14.931327 containerd[1572]: time="2026-04-16T02:29:14.930507817Z" level=info msg="RemoveContainer for \"e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77\"" Apr 16 02:29:15.071981 containerd[1572]: time="2026-04-16T02:29:15.071022459Z" level=info msg="RemoveContainer for \"e5ffbd1a8c805f2613862aecceabc7ba015d03aa0e02fe5e990f1085340add77\" returns successfully" Apr 16 02:29:15.173371 kubelet[2905]: E0416 02:29:15.168829 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:29:15.984280 systemd[1]: Started sshd@70-10.0.0.34:22-10.0.0.1:58098.service - OpenSSH per-connection server daemon (10.0.0.1:58098). Apr 16 02:29:17.296742 systemd[1]: cri-containerd-846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4.scope: Deactivated successfully. Apr 16 02:29:17.306963 systemd[1]: cri-containerd-846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4.scope: Consumed 29.137s CPU time, 37.2M memory peak, 1.8M read from disk, 4K written to disk. Apr 16 02:29:17.414029 sshd-session[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:29:17.504227 sshd[6109]: Accepted publickey for core from 10.0.0.1 port 58098 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:29:17.798860 systemd-logind[1559]: New session 71 of user core. Apr 16 02:29:17.834433 containerd[1572]: time="2026-04-16T02:29:17.821178253Z" level=info msg="received container exit event container_id:\"846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4\" id:\"846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4\" pid:3303 exit_status:1 exited_at:{seconds:1776306557 nanos:720344438}" Apr 16 02:29:18.019421 systemd[1]: Started session-71.scope - Session 71 of User core. Apr 16 02:29:18.529971 containerd[1572]: time="2026-04-16T02:29:18.527527671Z" level=warning msg="container event discarded" container=76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0 type=CONTAINER_CREATED_EVENT Apr 16 02:29:19.850897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4-rootfs.mount: Deactivated successfully. 
Apr 16 02:29:20.307798 kubelet[2905]: E0416 02:29:20.288538 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.176s" Apr 16 02:29:21.218933 kubelet[2905]: I0416 02:29:21.215870 2905 scope.go:117] "RemoveContainer" containerID="846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4" Apr 16 02:29:21.234131 kubelet[2905]: E0416 02:29:21.233857 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:29:21.413534 containerd[1572]: time="2026-04-16T02:29:21.413061050Z" level=info msg="CreateContainer within sandbox \"4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Apr 16 02:29:21.540045 containerd[1572]: time="2026-04-16T02:29:21.539274282Z" level=info msg="Container 57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:29:21.784978 containerd[1572]: time="2026-04-16T02:29:21.781874406Z" level=info msg="CreateContainer within sandbox \"4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d\"" Apr 16 02:29:21.882775 containerd[1572]: time="2026-04-16T02:29:21.878344525Z" level=info msg="StartContainer for \"57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d\"" Apr 16 02:29:21.903945 containerd[1572]: time="2026-04-16T02:29:21.903325639Z" level=info msg="connecting to shim 57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d" address="unix:///run/containerd/s/660c9f17e10a82e31ec8d7c1d115eeb12baa8d21aa12e4080ec56b1037d0d02a" protocol=ttrpc version=3 Apr 16 02:29:22.246650 kubelet[2905]: E0416 02:29:22.221455 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:29:23.246900 kubelet[2905]: E0416 02:29:23.244292 2905 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.155s" Apr 16 02:29:23.249164 systemd[1]: Started cri-containerd-57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d.scope - libcontainer container 57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d. Apr 16 02:29:23.976170 sshd[6113]: Connection closed by 10.0.0.1 port 58098 Apr 16 02:29:24.021777 sshd-session[6109]: pam_unix(sshd:session): session closed for user core Apr 16 02:29:24.273781 systemd[1]: sshd@70-10.0.0.34:22-10.0.0.1:58098.service: Deactivated successfully. Apr 16 02:29:24.441895 systemd[1]: session-71.scope: Deactivated successfully. Apr 16 02:29:24.445755 systemd[1]: session-71.scope: Consumed 2.690s CPU time, 17.9M memory peak. Apr 16 02:29:24.481890 systemd-logind[1559]: Session 71 logged out. Waiting for processes to exit. Apr 16 02:29:24.500059 systemd-logind[1559]: Removed session 71. 
Apr 16 02:29:24.776025 kubelet[2905]: I0416 02:29:24.775248 2905 scope.go:117] "RemoveContainer" containerID="279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe" Apr 16 02:29:24.802141 kubelet[2905]: E0416 02:29:24.778921 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:29:24.802141 kubelet[2905]: E0416 02:29:24.787199 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:29:25.493871 containerd[1572]: time="2026-04-16T02:29:25.493457843Z" level=error msg="get state for 57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d" error="context deadline exceeded" Apr 16 02:29:25.493871 containerd[1572]: time="2026-04-16T02:29:25.493708176Z" level=warning msg="unknown status" status=0 Apr 16 02:29:25.895730 containerd[1572]: time="2026-04-16T02:29:25.838428847Z" level=warning msg="container event discarded" container=76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0 type=CONTAINER_STARTED_EVENT Apr 16 02:29:27.120983 containerd[1572]: time="2026-04-16T02:29:27.120567843Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 16 02:29:28.169935 containerd[1572]: time="2026-04-16T02:29:28.169410487Z" level=info msg="StartContainer for \"57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d\" returns successfully" Apr 16 02:29:29.274756 systemd[1]: Started sshd@71-10.0.0.34:22-10.0.0.1:58412.service - OpenSSH per-connection server daemon (10.0.0.1:58412). Apr 16 02:29:29.810039 kubelet[2905]: E0416 02:29:29.809646 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:29:29.959610 sshd[6169]: Accepted publickey for core from 10.0.0.1 port 58412 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:29:29.963626 sshd-session[6169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:29:30.000179 systemd-logind[1559]: New session 72 of user core. Apr 16 02:29:30.014655 systemd[1]: Started session-72.scope - Session 72 of User core. Apr 16 02:29:30.837374 kubelet[2905]: E0416 02:29:30.833844 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:29:31.258299 sshd[6173]: Connection closed by 10.0.0.1 port 58412 Apr 16 02:29:31.269189 sshd-session[6169]: pam_unix(sshd:session): session closed for user core Apr 16 02:29:31.295972 systemd[1]: sshd@71-10.0.0.34:22-10.0.0.1:58412.service: Deactivated successfully. Apr 16 02:29:31.453187 systemd[1]: session-72.scope: Deactivated successfully. Apr 16 02:29:31.473414 systemd-logind[1559]: Session 72 logged out. Waiting for processes to exit. Apr 16 02:29:31.502196 systemd-logind[1559]: Removed session 72. 
Apr 16 02:29:32.180229 kubelet[2905]: E0416 02:29:32.179578 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:29:36.452901 systemd[1]: Started sshd@72-10.0.0.34:22-10.0.0.1:41850.service - OpenSSH per-connection server daemon (10.0.0.1:41850). Apr 16 02:29:36.902258 sshd[6190]: Accepted publickey for core from 10.0.0.1 port 41850 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:29:36.904434 sshd-session[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:29:36.966351 systemd-logind[1559]: New session 73 of user core. Apr 16 02:29:37.042840 systemd[1]: Started session-73.scope - Session 73 of User core. Apr 16 02:29:38.230729 kubelet[2905]: I0416 02:29:38.228473 2905 scope.go:117] "RemoveContainer" containerID="279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe" Apr 16 02:29:38.230729 kubelet[2905]: E0416 02:29:38.228620 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:29:38.230729 kubelet[2905]: E0416 02:29:38.230044 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:29:38.443220 sshd[6193]: Connection closed by 10.0.0.1 port 41850 Apr 16 02:29:38.446423 sshd-session[6190]: pam_unix(sshd:session): session closed for user core Apr 16 02:29:38.517178 systemd[1]: sshd@72-10.0.0.34:22-10.0.0.1:41850.service: Deactivated successfully. Apr 16 02:29:38.540879 systemd[1]: session-73.scope: Deactivated successfully. Apr 16 02:29:38.544640 systemd-logind[1559]: Session 73 logged out. Waiting for processes to exit. Apr 16 02:29:38.548477 systemd-logind[1559]: Removed session 73. Apr 16 02:29:41.138503 kubelet[2905]: E0416 02:29:41.125260 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:29:43.593590 systemd[1]: Started sshd@73-10.0.0.34:22-10.0.0.1:41852.service - OpenSSH per-connection server daemon (10.0.0.1:41852). Apr 16 02:29:43.950833 sshd[6209]: Accepted publickey for core from 10.0.0.1 port 41852 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:29:43.959333 sshd-session[6209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:29:44.079929 systemd-logind[1559]: New session 74 of user core. Apr 16 02:29:44.173883 systemd[1]: Started session-74.scope - Session 74 of User core. Apr 16 02:29:45.434395 sshd[6212]: Connection closed by 10.0.0.1 port 41852 Apr 16 02:29:45.435419 sshd-session[6209]: pam_unix(sshd:session): session closed for user core Apr 16 02:29:45.462443 systemd[1]: sshd@73-10.0.0.34:22-10.0.0.1:41852.service: Deactivated successfully. Apr 16 02:29:45.479871 systemd[1]: session-74.scope: Deactivated successfully. Apr 16 02:29:45.484815 systemd-logind[1559]: Session 74 logged out. Waiting for processes to exit. Apr 16 02:29:45.492851 systemd-logind[1559]: Removed session 74. 
Apr 16 02:29:48.118951 kubelet[2905]: E0416 02:29:48.117031 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:29:49.133418 kubelet[2905]: I0416 02:29:49.132448 2905 scope.go:117] "RemoveContainer" containerID="279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe" Apr 16 02:29:49.157327 kubelet[2905]: E0416 02:29:49.136294 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:29:49.168409 kubelet[2905]: E0416 02:29:49.163452 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:29:50.926344 systemd[1]: Started sshd@74-10.0.0.34:22-10.0.0.1:60606.service - OpenSSH per-connection server daemon (10.0.0.1:60606). Apr 16 02:29:51.993805 sshd[6225]: Accepted publickey for core from 10.0.0.1 port 60606 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:29:52.018059 sshd-session[6225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:29:52.378796 systemd-logind[1559]: New session 75 of user core. Apr 16 02:29:52.568901 systemd[1]: Started session-75.scope - Session 75 of User core. Apr 16 02:29:58.613282 sshd[6230]: Connection closed by 10.0.0.1 port 60606 Apr 16 02:29:58.623971 sshd-session[6225]: pam_unix(sshd:session): session closed for user core Apr 16 02:29:58.722378 systemd[1]: sshd@74-10.0.0.34:22-10.0.0.1:60606.service: Deactivated successfully. Apr 16 02:29:58.831842 systemd[1]: session-75.scope: Deactivated successfully. Apr 16 02:29:58.835862 systemd[1]: session-75.scope: Consumed 4.453s CPU time, 17.6M memory peak. Apr 16 02:29:58.907726 systemd-logind[1559]: Session 75 logged out. Waiting for processes to exit. Apr 16 02:29:58.929879 systemd-logind[1559]: Removed session 75. Apr 16 02:30:04.017445 systemd[1]: Started sshd@75-10.0.0.34:22-10.0.0.1:43918.service - OpenSSH per-connection server daemon (10.0.0.1:43918). 
Apr 16 02:30:04.117022 kubelet[2905]: I0416 02:30:04.102733 2905 scope.go:117] "RemoveContainer" containerID="279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe" Apr 16 02:30:04.117022 kubelet[2905]: E0416 02:30:04.104173 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:30:04.117022 kubelet[2905]: E0416 02:30:04.105449 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:30:05.295192 sshd[6247]: Accepted publickey for core from 10.0.0.1 port 43918 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:30:05.332637 sshd-session[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:30:05.591199 containerd[1572]: time="2026-04-16T02:30:05.582947882Z" level=warning msg="container event discarded" container=279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe type=CONTAINER_CREATED_EVENT Apr 16 02:30:05.602483 systemd-logind[1559]: New session 76 of user core. Apr 16 02:30:05.742516 systemd[1]: Started session-76.scope - Session 76 of User core. Apr 16 02:30:07.139271 kubelet[2905]: E0416 02:30:07.135210 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:30:08.726675 sshd[6250]: Connection closed by 10.0.0.1 port 43918 Apr 16 02:30:08.727464 sshd-session[6247]: pam_unix(sshd:session): session closed for user core Apr 16 02:30:08.821462 systemd[1]: sshd@75-10.0.0.34:22-10.0.0.1:43918.service: Deactivated successfully. Apr 16 02:30:08.861011 systemd[1]: session-76.scope: Deactivated successfully. Apr 16 02:30:08.861812 systemd[1]: session-76.scope: Consumed 2.248s CPU time, 17.7M memory peak. Apr 16 02:30:08.867945 systemd-logind[1559]: Session 76 logged out. Waiting for processes to exit. Apr 16 02:30:08.878008 systemd-logind[1559]: Removed session 76. Apr 16 02:30:09.800761 containerd[1572]: time="2026-04-16T02:30:09.759386156Z" level=warning msg="container event discarded" container=279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe type=CONTAINER_STARTED_EVENT Apr 16 02:30:14.062934 systemd[1]: Started sshd@76-10.0.0.34:22-10.0.0.1:35314.service - OpenSSH per-connection server daemon (10.0.0.1:35314). Apr 16 02:30:15.026863 sshd[6266]: Accepted publickey for core from 10.0.0.1 port 35314 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:30:15.076501 sshd-session[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:30:15.365989 systemd-logind[1559]: New session 77 of user core. Apr 16 02:30:15.447348 systemd[1]: Started session-77.scope - Session 77 of User core. 
Apr 16 02:30:18.118015 kubelet[2905]: I0416 02:30:18.117497 2905 scope.go:117] "RemoveContainer" containerID="279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe" Apr 16 02:30:18.130423 kubelet[2905]: E0416 02:30:18.130054 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:30:18.143477 kubelet[2905]: E0416 02:30:18.140997 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:30:19.018346 containerd[1572]: time="2026-04-16T02:30:19.016933342Z" level=warning msg="container event discarded" container=76167cd60ab8af5d9f54d46a980f549259fb4db162f3c5393d4ff88d1f9d70c0 type=CONTAINER_STOPPED_EVENT Apr 16 02:30:19.691733 sshd[6270]: Connection closed by 10.0.0.1 port 35314 Apr 16 02:30:19.695166 sshd-session[6266]: pam_unix(sshd:session): session closed for user core Apr 16 02:30:19.828211 systemd[1]: sshd@76-10.0.0.34:22-10.0.0.1:35314.service: Deactivated successfully. Apr 16 02:30:19.907682 systemd[1]: session-77.scope: Deactivated successfully. Apr 16 02:30:19.910527 systemd[1]: session-77.scope: Consumed 3.047s CPU time, 17.9M memory peak. Apr 16 02:30:20.019415 containerd[1572]: time="2026-04-16T02:30:20.018930710Z" level=warning msg="container event discarded" container=096375f1c5bc85b7b9e1ed40579f83b84f462f40328cea21cd309a6a5a80b4dd type=CONTAINER_DELETED_EVENT Apr 16 02:30:20.030693 systemd-logind[1559]: Session 77 logged out. Waiting for processes to exit. Apr 16 02:30:20.062976 systemd-logind[1559]: Removed session 77. Apr 16 02:30:24.792382 systemd[1]: Started sshd@77-10.0.0.34:22-10.0.0.1:39276.service - OpenSSH per-connection server daemon (10.0.0.1:39276). Apr 16 02:30:25.826372 sshd[6283]: Accepted publickey for core from 10.0.0.1 port 39276 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:30:25.943719 sshd-session[6283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:30:26.165625 systemd-logind[1559]: New session 78 of user core. Apr 16 02:30:26.185337 systemd[1]: Started session-78.scope - Session 78 of User core. Apr 16 02:30:28.742120 sshd[6286]: Connection closed by 10.0.0.1 port 39276 Apr 16 02:30:28.743135 sshd-session[6283]: pam_unix(sshd:session): session closed for user core Apr 16 02:30:28.763748 systemd-logind[1559]: Session 78 logged out. Waiting for processes to exit. Apr 16 02:30:28.770464 systemd[1]: sshd@77-10.0.0.34:22-10.0.0.1:39276.service: Deactivated successfully. Apr 16 02:30:28.842866 systemd[1]: session-78.scope: Deactivated successfully. Apr 16 02:30:28.875889 systemd[1]: session-78.scope: Consumed 2.160s CPU time, 17.9M memory peak. Apr 16 02:30:28.904133 systemd-logind[1559]: Removed session 78. 
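The per-session accounting lines ("Consumed 3.047s CPU time, 17.9M memory peak") are systemd's cgroup bookkeeping, printed as each session scope is stopped. On a cgroup v2 host the same figures can be read back from the unit's cgroup files; a rough sketch, assuming a hypothetical scope path (real session scopes sit under user.slice) and that memory.peak is available (kernel 5.19 or newer):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	// Path assumed for illustration; adjust to the scope you want to inspect.
	cg := "/sys/fs/cgroup/system.slice/example.scope"

	// cpu.stat's usage_usec is the total CPU time consumed by the cgroup.
	if data, err := os.ReadFile(cg + "/cpu.stat"); err == nil {
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasPrefix(line, "usage_usec ") {
				usec, _ := strconv.ParseInt(strings.Fields(line)[1], 10, 64)
				fmt.Printf("CPU time consumed: %.3fs\n", float64(usec)/1e6)
			}
		}
	}

	// memory.peak (cgroup v2) is the high-water mark reported as "memory peak".
	if peak, err := os.ReadFile(cg + "/memory.peak"); err == nil {
		bytes, _ := strconv.ParseInt(strings.TrimSpace(string(peak)), 10, 64)
		fmt.Printf("memory peak: %.1fM\n", float64(bytes)/(1024*1024))
	}
}
```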
Apr 16 02:30:30.086755 kubelet[2905]: I0416 02:30:30.086228 2905 scope.go:117] "RemoveContainer" containerID="279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe" Apr 16 02:30:30.093189 kubelet[2905]: E0416 02:30:30.089701 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:30:30.093189 kubelet[2905]: E0416 02:30:30.089937 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:30:33.105924 kubelet[2905]: E0416 02:30:33.103835 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:30:34.003993 systemd[1]: Started sshd@78-10.0.0.34:22-10.0.0.1:51664.service - OpenSSH per-connection server daemon (10.0.0.1:51664). Apr 16 02:30:34.748699 sshd[6299]: Accepted publickey for core from 10.0.0.1 port 51664 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:30:34.758311 sshd-session[6299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:30:34.829925 systemd-logind[1559]: New session 79 of user core. Apr 16 02:30:35.019159 systemd[1]: Started session-79.scope - Session 79 of User core. Apr 16 02:30:35.893572 sshd[6302]: Connection closed by 10.0.0.1 port 51664 Apr 16 02:30:35.895865 sshd-session[6299]: pam_unix(sshd:session): session closed for user core Apr 16 02:30:35.907505 systemd[1]: sshd@78-10.0.0.34:22-10.0.0.1:51664.service: Deactivated successfully. Apr 16 02:30:35.917979 systemd[1]: session-79.scope: Deactivated successfully. Apr 16 02:30:35.932942 systemd-logind[1559]: Session 79 logged out. Waiting for processes to exit. Apr 16 02:30:35.938054 systemd-logind[1559]: Removed session 79. Apr 16 02:30:41.380237 systemd[1]: Started sshd@79-10.0.0.34:22-10.0.0.1:35866.service - OpenSSH per-connection server daemon (10.0.0.1:35866). Apr 16 02:30:42.227685 sshd[6318]: Accepted publickey for core from 10.0.0.1 port 35866 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:30:42.236064 sshd-session[6318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:30:42.363129 systemd-logind[1559]: New session 80 of user core. Apr 16 02:30:42.378787 systemd[1]: Started session-80.scope - Session 80 of User core. 
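kube-controller-manager is still in CrashLoopBackOff at this point: kubelet restarts a crashing container with an exponentially growing delay, by default starting at 10s and doubling up to a 5m cap, and the quoted 2m40s (160s) is the fifth step of that schedule. A small sketch of the schedule using those documented defaults (values are not read from this node):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		base     = 10 * time.Second // kubelet's initial restart delay
		maxDelay = 5 * time.Minute  // back-off stops growing here
	)
	d := base
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("failed attempt %d: next restart backed off %v\n", attempt, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
}
```

This prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s; once the container stays up long enough the back-off is reset and the schedule starts over.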
Apr 16 02:30:43.087314 kubelet[2905]: E0416 02:30:43.086969 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:30:43.089331 kubelet[2905]: I0416 02:30:43.086993 2905 scope.go:117] "RemoveContainer" containerID="279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe" Apr 16 02:30:43.089331 kubelet[2905]: E0416 02:30:43.088060 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:30:43.089331 kubelet[2905]: E0416 02:30:43.088963 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:30:44.171039 kubelet[2905]: E0416 02:30:44.146693 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:30:44.758863 sshd[6321]: Connection closed by 10.0.0.1 port 35866 Apr 16 02:30:44.764723 sshd-session[6318]: pam_unix(sshd:session): session closed for user core Apr 16 02:30:44.838442 systemd[1]: sshd@79-10.0.0.34:22-10.0.0.1:35866.service: Deactivated successfully. Apr 16 02:30:44.958958 systemd[1]: session-80.scope: Deactivated successfully. Apr 16 02:30:44.960190 systemd[1]: session-80.scope: Consumed 1.725s CPU time, 16.1M memory peak. Apr 16 02:30:44.986964 systemd-logind[1559]: Session 80 logged out. Waiting for processes to exit. Apr 16 02:30:45.016832 systemd-logind[1559]: Removed session 80. Apr 16 02:30:47.138678 kubelet[2905]: E0416 02:30:47.137780 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:30:48.097788 kubelet[2905]: E0416 02:30:48.096457 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:30:50.201713 systemd[1]: Started sshd@80-10.0.0.34:22-10.0.0.1:46390.service - OpenSSH per-connection server daemon (10.0.0.1:46390). Apr 16 02:30:50.928226 sshd[6334]: Accepted publickey for core from 10.0.0.1 port 46390 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:30:50.999227 sshd-session[6334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:30:51.212676 systemd-logind[1559]: New session 81 of user core. Apr 16 02:30:51.390581 systemd[1]: Started session-81.scope - Session 81 of User core. 
Apr 16 02:30:56.175652 kubelet[2905]: I0416 02:30:56.175268 2905 scope.go:117] "RemoveContainer" containerID="279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe" Apr 16 02:30:56.175652 kubelet[2905]: E0416 02:30:56.175773 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:30:56.182350 kubelet[2905]: E0416 02:30:56.178333 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:30:56.362957 sshd[6337]: Connection closed by 10.0.0.1 port 46390 Apr 16 02:30:56.367761 sshd-session[6334]: pam_unix(sshd:session): session closed for user core Apr 16 02:30:56.488415 systemd[1]: sshd@80-10.0.0.34:22-10.0.0.1:46390.service: Deactivated successfully. Apr 16 02:30:56.543996 systemd[1]: session-81.scope: Deactivated successfully. Apr 16 02:30:56.546726 systemd[1]: session-81.scope: Consumed 3.562s CPU time, 16M memory peak. Apr 16 02:30:56.578327 systemd-logind[1559]: Session 81 logged out. Waiting for processes to exit. Apr 16 02:30:56.622142 systemd-logind[1559]: Removed session 81. Apr 16 02:30:57.108532 kubelet[2905]: E0416 02:30:57.106315 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:31:01.598588 systemd[1]: Started sshd@81-10.0.0.34:22-10.0.0.1:42496.service - OpenSSH per-connection server daemon (10.0.0.1:42496). Apr 16 02:31:02.236755 sshd[6355]: Accepted publickey for core from 10.0.0.1 port 42496 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:31:02.248869 sshd-session[6355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:31:02.471034 systemd-logind[1559]: New session 82 of user core. Apr 16 02:31:02.619121 systemd[1]: Started session-82.scope - Session 82 of User core. Apr 16 02:31:06.623282 sshd[6358]: Connection closed by 10.0.0.1 port 42496 Apr 16 02:31:06.626716 sshd-session[6355]: pam_unix(sshd:session): session closed for user core Apr 16 02:31:06.746998 systemd[1]: sshd@81-10.0.0.34:22-10.0.0.1:42496.service: Deactivated successfully. Apr 16 02:31:06.801516 systemd[1]: session-82.scope: Deactivated successfully. Apr 16 02:31:06.806797 systemd[1]: session-82.scope: Consumed 2.514s CPU time, 15.8M memory peak. Apr 16 02:31:06.841076 systemd-logind[1559]: Session 82 logged out. Waiting for processes to exit. Apr 16 02:31:06.933833 systemd-logind[1559]: Removed session 82. 
Apr 16 02:31:11.082918 kubelet[2905]: I0416 02:31:11.082684 2905 scope.go:117] "RemoveContainer" containerID="279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe" Apr 16 02:31:11.082918 kubelet[2905]: E0416 02:31:11.082955 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:31:11.082918 kubelet[2905]: E0416 02:31:11.083067 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:31:11.763590 systemd[1]: Started sshd@82-10.0.0.34:22-10.0.0.1:40308.service - OpenSSH per-connection server daemon (10.0.0.1:40308). Apr 16 02:31:12.203811 sshd[6374]: Accepted publickey for core from 10.0.0.1 port 40308 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:31:12.248912 sshd-session[6374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:31:12.412939 systemd-logind[1559]: New session 83 of user core. Apr 16 02:31:12.498032 systemd[1]: Started session-83.scope - Session 83 of User core. Apr 16 02:31:13.593814 sshd[6377]: Connection closed by 10.0.0.1 port 40308 Apr 16 02:31:13.595870 sshd-session[6374]: pam_unix(sshd:session): session closed for user core Apr 16 02:31:13.608036 systemd[1]: sshd@82-10.0.0.34:22-10.0.0.1:40308.service: Deactivated successfully. Apr 16 02:31:13.618378 systemd[1]: session-83.scope: Deactivated successfully. Apr 16 02:31:13.620894 systemd-logind[1559]: Session 83 logged out. Waiting for processes to exit. Apr 16 02:31:13.624117 systemd-logind[1559]: Removed session 83. Apr 16 02:31:18.748059 systemd[1]: Started sshd@83-10.0.0.34:22-10.0.0.1:57048.service - OpenSSH per-connection server daemon (10.0.0.1:57048). Apr 16 02:31:19.172492 sshd[6392]: Accepted publickey for core from 10.0.0.1 port 57048 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:31:19.188045 sshd-session[6392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:31:19.280412 systemd-logind[1559]: New session 84 of user core. Apr 16 02:31:19.315066 systemd[1]: Started session-84.scope - Session 84 of User core. Apr 16 02:31:20.093313 kubelet[2905]: E0416 02:31:20.085073 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:31:20.558337 sshd[6395]: Connection closed by 10.0.0.1 port 57048 Apr 16 02:31:20.556932 sshd-session[6392]: pam_unix(sshd:session): session closed for user core Apr 16 02:31:20.597184 systemd[1]: sshd@83-10.0.0.34:22-10.0.0.1:57048.service: Deactivated successfully. Apr 16 02:31:20.642437 systemd[1]: session-84.scope: Deactivated successfully. Apr 16 02:31:20.644778 systemd-logind[1559]: Session 84 logged out. Waiting for processes to exit. Apr 16 02:31:20.649939 systemd-logind[1559]: Removed session 84. Apr 16 02:31:25.637445 systemd[1]: Started sshd@84-10.0.0.34:22-10.0.0.1:49526.service - OpenSSH per-connection server daemon (10.0.0.1:49526). 
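Every accepted login above reports the same public key as SHA256:KOGEJiHfbLr/…. OpenSSH's SHA256 fingerprint is the unpadded base64 of the SHA-256 digest of the wire-format key blob (the base64 field of an authorized_keys line), so the value can be recomputed from the key on disk. A sketch of that computation on a placeholder blob, not the key from this log:

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// fingerprint reproduces ssh-keygen's SHA256-style fingerprint for a raw key blob.
func fingerprint(keyBlob []byte) string {
	sum := sha256.Sum256(keyBlob)
	// OpenSSH prints the digest as unpadded standard base64.
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:])
}

func main() {
	// Second field of an authorized_keys entry (placeholder data).
	pubField := "AAAAB3NzaC1yc2EAAAADAQABAAABAQC7example"
	blob, err := base64.StdEncoding.DecodeString(pubField)
	if err != nil {
		fmt.Println("not a valid key blob:", err)
		return
	}
	fmt.Println(fingerprint(blob))
}
```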
Apr 16 02:31:26.117995 kubelet[2905]: I0416 02:31:26.117831 2905 scope.go:117] "RemoveContainer" containerID="279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe" Apr 16 02:31:26.124788 kubelet[2905]: E0416 02:31:26.119375 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:31:26.124788 kubelet[2905]: E0416 02:31:26.119524 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:31:26.163184 sshd[6409]: Accepted publickey for core from 10.0.0.1 port 49526 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:31:26.174457 sshd-session[6409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:31:26.212081 systemd-logind[1559]: New session 85 of user core. Apr 16 02:31:26.253294 systemd[1]: Started session-85.scope - Session 85 of User core. Apr 16 02:31:27.308283 sshd[6412]: Connection closed by 10.0.0.1 port 49526 Apr 16 02:31:27.313633 sshd-session[6409]: pam_unix(sshd:session): session closed for user core Apr 16 02:31:27.355360 systemd[1]: sshd@84-10.0.0.34:22-10.0.0.1:49526.service: Deactivated successfully. Apr 16 02:31:27.453776 systemd[1]: session-85.scope: Deactivated successfully. Apr 16 02:31:27.459443 systemd-logind[1559]: Session 85 logged out. Waiting for processes to exit. Apr 16 02:31:27.467449 systemd-logind[1559]: Removed session 85. Apr 16 02:31:32.437521 systemd[1]: Started sshd@85-10.0.0.34:22-10.0.0.1:49530.service - OpenSSH per-connection server daemon (10.0.0.1:49530). Apr 16 02:31:32.737370 sshd[6427]: Accepted publickey for core from 10.0.0.1 port 49530 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:31:32.746297 sshd-session[6427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:31:32.824394 systemd-logind[1559]: New session 86 of user core. Apr 16 02:31:32.835811 systemd[1]: Started session-86.scope - Session 86 of User core. Apr 16 02:31:34.073271 sshd[6430]: Connection closed by 10.0.0.1 port 49530 Apr 16 02:31:34.074286 sshd-session[6427]: pam_unix(sshd:session): session closed for user core Apr 16 02:31:34.088282 systemd[1]: sshd@85-10.0.0.34:22-10.0.0.1:49530.service: Deactivated successfully. Apr 16 02:31:34.099202 systemd[1]: session-86.scope: Deactivated successfully. Apr 16 02:31:34.128527 systemd-logind[1559]: Session 86 logged out. Waiting for processes to exit. Apr 16 02:31:34.144992 systemd-logind[1559]: Removed session 86. Apr 16 02:31:39.114520 systemd[1]: Started sshd@86-10.0.0.34:22-10.0.0.1:35594.service - OpenSSH per-connection server daemon (10.0.0.1:35594). Apr 16 02:31:39.452829 sshd[6443]: Accepted publickey for core from 10.0.0.1 port 35594 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:31:39.465186 sshd-session[6443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:31:39.513955 systemd-logind[1559]: New session 87 of user core. Apr 16 02:31:39.527977 systemd[1]: Started session-87.scope - Session 87 of User core. 
Apr 16 02:31:40.878942 sshd[6447]: Connection closed by 10.0.0.1 port 35594 Apr 16 02:31:40.884155 sshd-session[6443]: pam_unix(sshd:session): session closed for user core Apr 16 02:31:40.923472 systemd[1]: sshd@86-10.0.0.34:22-10.0.0.1:35594.service: Deactivated successfully. Apr 16 02:31:40.953694 systemd[1]: session-87.scope: Deactivated successfully. Apr 16 02:31:40.992743 systemd-logind[1559]: Session 87 logged out. Waiting for processes to exit. Apr 16 02:31:41.013764 systemd-logind[1559]: Removed session 87. Apr 16 02:31:41.099013 kubelet[2905]: I0416 02:31:41.098261 2905 scope.go:117] "RemoveContainer" containerID="279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe" Apr 16 02:31:41.099013 kubelet[2905]: E0416 02:31:41.098364 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:31:41.099013 kubelet[2905]: E0416 02:31:41.098480 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 16 02:31:45.987319 systemd[1]: Started sshd@87-10.0.0.34:22-10.0.0.1:58546.service - OpenSSH per-connection server daemon (10.0.0.1:58546). Apr 16 02:31:46.491721 sshd[6462]: Accepted publickey for core from 10.0.0.1 port 58546 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:31:46.543172 containerd[1572]: time="2026-04-16T02:31:46.542499406Z" level=warning msg="container event discarded" container=44e5b13a002fb7cf7395dc5ed693d924dd47a219d0cad02d18d48754864f98e3 type=CONTAINER_CREATED_EVENT Apr 16 02:31:46.546280 sshd-session[6462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:31:46.702724 systemd-logind[1559]: New session 88 of user core. Apr 16 02:31:46.758218 systemd[1]: Started session-88.scope - Session 88 of User core. Apr 16 02:31:47.937809 sshd[6465]: Connection closed by 10.0.0.1 port 58546 Apr 16 02:31:47.960185 sshd-session[6462]: pam_unix(sshd:session): session closed for user core Apr 16 02:31:48.068204 systemd[1]: sshd@87-10.0.0.34:22-10.0.0.1:58546.service: Deactivated successfully. Apr 16 02:31:48.084108 systemd[1]: session-88.scope: Deactivated successfully. Apr 16 02:31:48.106211 systemd-logind[1559]: Session 88 logged out. Waiting for processes to exit. Apr 16 02:31:48.119929 systemd-logind[1559]: Removed session 88. Apr 16 02:31:48.515105 containerd[1572]: time="2026-04-16T02:31:48.514603477Z" level=warning msg="container event discarded" container=44e5b13a002fb7cf7395dc5ed693d924dd47a219d0cad02d18d48754864f98e3 type=CONTAINER_STARTED_EVENT Apr 16 02:31:49.084087 kubelet[2905]: E0416 02:31:49.083670 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:31:52.997430 systemd[1]: Started sshd@88-10.0.0.34:22-10.0.0.1:58558.service - OpenSSH per-connection server daemon (10.0.0.1:58558). 
Apr 16 02:31:53.090881 kubelet[2905]: E0416 02:31:53.089628 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:31:53.223632 sshd[6478]: Accepted publickey for core from 10.0.0.1 port 58558 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:31:53.228603 sshd-session[6478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:31:53.271262 systemd-logind[1559]: New session 89 of user core. Apr 16 02:31:53.301277 systemd[1]: Started session-89.scope - Session 89 of User core. Apr 16 02:31:54.190872 kubelet[2905]: I0416 02:31:54.189339 2905 scope.go:117] "RemoveContainer" containerID="279aa03a168df933e031c50599d78326f48af3041a58235c0352f5fa108fe1fe" Apr 16 02:31:54.202585 kubelet[2905]: E0416 02:31:54.197791 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:31:54.236989 containerd[1572]: time="2026-04-16T02:31:54.236754576Z" level=info msg="CreateContainer within sandbox \"8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:7,}" Apr 16 02:31:54.328787 containerd[1572]: time="2026-04-16T02:31:54.324400413Z" level=info msg="Container f6b20675b29f8f68a34b7d6742739771632555f0277e2e63e0ace1e21cded20e: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:31:54.512932 containerd[1572]: time="2026-04-16T02:31:54.512721813Z" level=info msg="CreateContainer within sandbox \"8dd48dced690d6fbae48e1700d55885898be94f870bbabeb883e644269c69454\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:7,} returns container id \"f6b20675b29f8f68a34b7d6742739771632555f0277e2e63e0ace1e21cded20e\"" Apr 16 02:31:54.550665 containerd[1572]: time="2026-04-16T02:31:54.550337889Z" level=info msg="StartContainer for \"f6b20675b29f8f68a34b7d6742739771632555f0277e2e63e0ace1e21cded20e\"" Apr 16 02:31:54.622007 containerd[1572]: time="2026-04-16T02:31:54.620307051Z" level=info msg="connecting to shim f6b20675b29f8f68a34b7d6742739771632555f0277e2e63e0ace1e21cded20e" address="unix:///run/containerd/s/19fb7b3958679c24ac66e8dd57527f0cf6dd433ec0ccb7dc7514e788b8b7a005" protocol=ttrpc version=3 Apr 16 02:31:54.817790 systemd[1]: Started cri-containerd-f6b20675b29f8f68a34b7d6742739771632555f0277e2e63e0ace1e21cded20e.scope - libcontainer container f6b20675b29f8f68a34b7d6742739771632555f0277e2e63e0ace1e21cded20e. Apr 16 02:31:55.033711 sshd[6481]: Connection closed by 10.0.0.1 port 58558 Apr 16 02:31:55.037444 sshd-session[6478]: pam_unix(sshd:session): session closed for user core Apr 16 02:31:55.081724 systemd[1]: sshd@88-10.0.0.34:22-10.0.0.1:58558.service: Deactivated successfully. Apr 16 02:31:55.110885 systemd[1]: session-89.scope: Deactivated successfully. Apr 16 02:31:55.130812 systemd-logind[1559]: Session 89 logged out. Waiting for processes to exit. Apr 16 02:31:55.135577 systemd-logind[1559]: Removed session 89. 
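Once the 2m40s back-off expires, attempt 7 of kube-controller-manager finally goes ahead: kubelet asks the CRI to create the container inside the existing sandbox, containerd builds it, connects to the runtime shim over ttrpc, and systemd wraps the process in a cri-containerd-….scope unit. Outside of Kubernetes the same create/start flow can be driven with containerd's Go client; a rough sketch assuming the containerd 1.x client packages, the default socket, and placeholder image and container names (CRI-managed containers live in the k8s.io namespace):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask starts the shim; Start launches the container process itself.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("started", container.ID())
}
```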
Apr 16 02:31:55.200513 containerd[1572]: time="2026-04-16T02:31:55.200100487Z" level=info msg="StartContainer for \"f6b20675b29f8f68a34b7d6742739771632555f0277e2e63e0ace1e21cded20e\" returns successfully" Apr 16 02:31:55.814132 kubelet[2905]: E0416 02:31:55.810253 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:31:56.093015 kubelet[2905]: E0416 02:31:56.092342 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:32:00.068629 systemd[1]: Started sshd@89-10.0.0.34:22-10.0.0.1:48514.service - OpenSSH per-connection server daemon (10.0.0.1:48514). Apr 16 02:32:00.165694 sshd[6530]: Accepted publickey for core from 10.0.0.1 port 48514 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:32:00.171649 sshd-session[6530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:32:00.201384 systemd-logind[1559]: New session 90 of user core. Apr 16 02:32:00.217121 systemd[1]: Started session-90.scope - Session 90 of User core. Apr 16 02:32:00.535770 sshd[6533]: Connection closed by 10.0.0.1 port 48514 Apr 16 02:32:00.536836 sshd-session[6530]: pam_unix(sshd:session): session closed for user core Apr 16 02:32:00.545075 systemd[1]: sshd@89-10.0.0.34:22-10.0.0.1:48514.service: Deactivated successfully. Apr 16 02:32:00.549477 systemd[1]: session-90.scope: Deactivated successfully. Apr 16 02:32:00.550353 systemd-logind[1559]: Session 90 logged out. Waiting for processes to exit. Apr 16 02:32:00.551632 systemd-logind[1559]: Removed session 90. Apr 16 02:32:03.724749 kubelet[2905]: E0416 02:32:03.724528 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:32:05.614487 systemd[1]: Started sshd@90-10.0.0.34:22-10.0.0.1:42846.service - OpenSSH per-connection server daemon (10.0.0.1:42846). Apr 16 02:32:05.676344 sshd[6546]: Accepted publickey for core from 10.0.0.1 port 42846 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:32:05.677945 sshd-session[6546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:32:05.687203 systemd-logind[1559]: New session 91 of user core. Apr 16 02:32:05.696735 systemd[1]: Started session-91.scope - Session 91 of User core. Apr 16 02:32:05.995149 sshd[6549]: Connection closed by 10.0.0.1 port 42846 Apr 16 02:32:05.996386 sshd-session[6546]: pam_unix(sshd:session): session closed for user core Apr 16 02:32:06.003146 systemd[1]: sshd@90-10.0.0.34:22-10.0.0.1:42846.service: Deactivated successfully. Apr 16 02:32:06.005327 systemd[1]: session-91.scope: Deactivated successfully. Apr 16 02:32:06.005985 systemd-logind[1559]: Session 91 logged out. Waiting for processes to exit. Apr 16 02:32:06.007087 systemd-logind[1559]: Removed session 91. 
Apr 16 02:32:07.084995 kubelet[2905]: E0416 02:32:07.084715 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:32:08.083732 kubelet[2905]: E0416 02:32:08.082762 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:32:11.025918 systemd[1]: Started sshd@91-10.0.0.34:22-10.0.0.1:42852.service - OpenSSH per-connection server daemon (10.0.0.1:42852). Apr 16 02:32:11.245823 sshd[6564]: Accepted publickey for core from 10.0.0.1 port 42852 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:32:11.247776 sshd-session[6564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:32:11.303410 systemd-logind[1559]: New session 92 of user core. Apr 16 02:32:11.334324 systemd[1]: Started session-92.scope - Session 92 of User core. Apr 16 02:32:11.707010 sshd[6567]: Connection closed by 10.0.0.1 port 42852 Apr 16 02:32:11.708894 sshd-session[6564]: pam_unix(sshd:session): session closed for user core Apr 16 02:32:11.756116 systemd[1]: sshd@91-10.0.0.34:22-10.0.0.1:42852.service: Deactivated successfully. Apr 16 02:32:11.777392 systemd[1]: session-92.scope: Deactivated successfully. Apr 16 02:32:11.781078 systemd-logind[1559]: Session 92 logged out. Waiting for processes to exit. Apr 16 02:32:11.789626 systemd-logind[1559]: Removed session 92. Apr 16 02:32:12.082403 kubelet[2905]: E0416 02:32:12.082142 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:32:13.696094 kubelet[2905]: E0416 02:32:13.695855 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:32:16.838103 systemd[1]: Started sshd@92-10.0.0.34:22-10.0.0.1:41220.service - OpenSSH per-connection server daemon (10.0.0.1:41220). Apr 16 02:32:17.181602 sshd[6582]: Accepted publickey for core from 10.0.0.1 port 41220 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:32:17.185033 sshd-session[6582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:32:17.343039 systemd-logind[1559]: New session 93 of user core. Apr 16 02:32:17.395829 systemd[1]: Started session-93.scope - Session 93 of User core. Apr 16 02:32:18.243080 sshd[6585]: Connection closed by 10.0.0.1 port 41220 Apr 16 02:32:18.244681 sshd-session[6582]: pam_unix(sshd:session): session closed for user core Apr 16 02:32:18.259513 systemd[1]: sshd@92-10.0.0.34:22-10.0.0.1:41220.service: Deactivated successfully. Apr 16 02:32:18.267073 systemd[1]: session-93.scope: Deactivated successfully. Apr 16 02:32:18.269450 systemd-logind[1559]: Session 93 logged out. Waiting for processes to exit. Apr 16 02:32:18.270514 systemd-logind[1559]: Removed session 93. Apr 16 02:32:23.266902 systemd[1]: Started sshd@93-10.0.0.34:22-10.0.0.1:41222.service - OpenSSH per-connection server daemon (10.0.0.1:41222). 
Apr 16 02:32:23.383808 sshd[6598]: Accepted publickey for core from 10.0.0.1 port 41222 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:32:23.385442 sshd-session[6598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:32:23.391809 systemd-logind[1559]: New session 94 of user core. Apr 16 02:32:23.404961 systemd[1]: Started session-94.scope - Session 94 of User core. Apr 16 02:32:23.713359 sshd[6601]: Connection closed by 10.0.0.1 port 41222 Apr 16 02:32:23.714891 sshd-session[6598]: pam_unix(sshd:session): session closed for user core Apr 16 02:32:23.735011 systemd[1]: sshd@93-10.0.0.34:22-10.0.0.1:41222.service: Deactivated successfully. Apr 16 02:32:23.739853 systemd[1]: session-94.scope: Deactivated successfully. Apr 16 02:32:23.741910 systemd-logind[1559]: Session 94 logged out. Waiting for processes to exit. Apr 16 02:32:23.749740 systemd[1]: Started sshd@94-10.0.0.34:22-10.0.0.1:41230.service - OpenSSH per-connection server daemon (10.0.0.1:41230). Apr 16 02:32:23.750755 systemd-logind[1559]: Removed session 94. Apr 16 02:32:23.843496 sshd[6614]: Accepted publickey for core from 10.0.0.1 port 41230 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:32:23.910051 sshd-session[6614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:32:23.942322 systemd-logind[1559]: New session 95 of user core. Apr 16 02:32:23.961909 systemd[1]: Started session-95.scope - Session 95 of User core. Apr 16 02:32:27.396991 containerd[1572]: time="2026-04-16T02:32:27.396392201Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 02:32:27.462108 containerd[1572]: time="2026-04-16T02:32:27.461884456Z" level=info msg="StopContainer for \"57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d\" with timeout 30 (s)" Apr 16 02:32:27.493792 containerd[1572]: time="2026-04-16T02:32:27.492390161Z" level=info msg="Stop container \"57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d\" with signal terminated" Apr 16 02:32:27.539909 containerd[1572]: time="2026-04-16T02:32:27.539268098Z" level=info msg="StopContainer for \"3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2\" with timeout 2 (s)" Apr 16 02:32:27.547752 containerd[1572]: time="2026-04-16T02:32:27.547299163Z" level=info msg="Stop container \"3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2\" with signal terminated" Apr 16 02:32:27.647408 systemd[1]: cri-containerd-57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d.scope: Deactivated successfully. Apr 16 02:32:27.694754 systemd[1]: cri-containerd-57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d.scope: Consumed 8.295s CPU time, 35.7M memory peak, 1.3M read from disk, 4K written to disk. 
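The Cilium teardown begins here: removing /etc/cni/net.d/05-cilium.conf leaves containerd with no usable CNI config, so its config watcher logs the reload failure, and the two StopContainer calls follow with 30s and 2s grace periods (SIGTERM first, with a force kill if the grace period expires). The "fs change event" wording points at an inotify-style watch on that directory; a minimal sketch of the same pattern with the fsnotify package (the directory is taken from the log, everything else is illustrative):

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Watch the CNI configuration directory referenced in the log.
	if err := watcher.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case ev, ok := <-watcher.Events:
			if !ok {
				return
			}
			if ev.Op&(fsnotify.Remove|fsnotify.Create|fsnotify.Write) != 0 {
				// A real runtime would re-scan the directory and rebuild its CNI config here.
				log.Printf("cni config change: %s %s", ev.Op, ev.Name)
			}
		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			log.Println("watch error:", err)
		}
	}
}
```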
Apr 16 02:32:27.710818 containerd[1572]: time="2026-04-16T02:32:27.705985034Z" level=info msg="received container exit event container_id:\"57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d\" id:\"57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d\" pid:6147 exited_at:{seconds:1776306747 nanos:704930244}" Apr 16 02:32:27.777350 systemd-networkd[1492]: lxc_health: Link DOWN Apr 16 02:32:27.781836 systemd-networkd[1492]: lxc_health: Lost carrier Apr 16 02:32:27.817220 systemd[1]: cri-containerd-3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2.scope: Deactivated successfully. Apr 16 02:32:27.817701 systemd[1]: cri-containerd-3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2.scope: Consumed 3min 11.532s CPU time, 130.6M memory peak, 576K read from disk, 13.3M written to disk. Apr 16 02:32:27.831800 containerd[1572]: time="2026-04-16T02:32:27.831767529Z" level=info msg="received container exit event container_id:\"3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2\" id:\"3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2\" pid:3724 exited_at:{seconds:1776306747 nanos:831292749}" Apr 16 02:32:28.075937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d-rootfs.mount: Deactivated successfully. Apr 16 02:32:28.097480 containerd[1572]: time="2026-04-16T02:32:28.097099418Z" level=info msg="StopContainer for \"57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d\" returns successfully" Apr 16 02:32:28.107187 containerd[1572]: time="2026-04-16T02:32:28.107129499Z" level=info msg="StopPodSandbox for \"4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c\"" Apr 16 02:32:28.111021 containerd[1572]: time="2026-04-16T02:32:28.110489542Z" level=info msg="Container to stop \"846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 16 02:32:28.111021 containerd[1572]: time="2026-04-16T02:32:28.110994290Z" level=info msg="Container to stop \"57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 16 02:32:28.161983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2-rootfs.mount: Deactivated successfully. 
Apr 16 02:32:28.162641 containerd[1572]: time="2026-04-16T02:32:28.162181165Z" level=info msg="StopContainer for \"3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2\" returns successfully" Apr 16 02:32:28.165853 containerd[1572]: time="2026-04-16T02:32:28.163290308Z" level=info msg="StopPodSandbox for \"51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6\"" Apr 16 02:32:28.165853 containerd[1572]: time="2026-04-16T02:32:28.163841546Z" level=info msg="Container to stop \"8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 16 02:32:28.165853 containerd[1572]: time="2026-04-16T02:32:28.163920911Z" level=info msg="Container to stop \"1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 16 02:32:28.165853 containerd[1572]: time="2026-04-16T02:32:28.163930146Z" level=info msg="Container to stop \"768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 16 02:32:28.165853 containerd[1572]: time="2026-04-16T02:32:28.163937892Z" level=info msg="Container to stop \"3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 16 02:32:28.165853 containerd[1572]: time="2026-04-16T02:32:28.163951015Z" level=info msg="Container to stop \"b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 16 02:32:28.167868 systemd[1]: cri-containerd-4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c.scope: Deactivated successfully. Apr 16 02:32:28.173135 containerd[1572]: time="2026-04-16T02:32:28.172092072Z" level=info msg="received sandbox exit event container_id:\"4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c\" id:\"4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c\" exit_status:137 exited_at:{seconds:1776306748 nanos:169742315}" monitor_name=podsandbox Apr 16 02:32:28.201602 containerd[1572]: time="2026-04-16T02:32:28.201368615Z" level=info msg="received sandbox exit event container_id:\"51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6\" id:\"51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6\" exit_status:137 exited_at:{seconds:1776306748 nanos:200623344}" monitor_name=podsandbox Apr 16 02:32:28.201460 systemd[1]: cri-containerd-51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6.scope: Deactivated successfully. Apr 16 02:32:28.299725 sshd[6617]: Connection closed by 10.0.0.1 port 41230 Apr 16 02:32:28.307097 sshd-session[6614]: pam_unix(sshd:session): session closed for user core Apr 16 02:32:28.319649 systemd[1]: sshd@94-10.0.0.34:22-10.0.0.1:41230.service: Deactivated successfully. Apr 16 02:32:28.331321 systemd[1]: session-95.scope: Deactivated successfully. Apr 16 02:32:28.334318 systemd[1]: session-95.scope: Consumed 1.399s CPU time, 26.7M memory peak. Apr 16 02:32:28.339837 systemd-logind[1559]: Session 95 logged out. Waiting for processes to exit. Apr 16 02:32:28.346868 systemd[1]: Started sshd@95-10.0.0.34:22-10.0.0.1:48124.service - OpenSSH per-connection server daemon (10.0.0.1:48124). Apr 16 02:32:28.357809 systemd-logind[1559]: Removed session 95. 
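Both pod sandboxes report exit_status:137 as they are shut down. By the usual convention for container exit codes, anything above 128 is 128 plus the fatal signal number, so 137 means the process was ended by SIGKILL (9), which is expected for a sandbox that is force-killed during teardown rather than exiting on its own. A tiny decoding sketch:

```go
package main

import (
	"fmt"
	"syscall"
)

// decodeExit reports whether an exit status encodes a fatal signal (status = 128 + signo).
func decodeExit(status int) string {
	if status > 128 {
		sig := syscall.Signal(status - 128)
		return fmt.Sprintf("killed by signal %d (%s)", int(sig), sig)
	}
	return fmt.Sprintf("exited normally with code %d", status)
}

func main() {
	fmt.Println(decodeExit(137)) // killed by signal 9 (killed)
	fmt.Println(decodeExit(0))   // exited normally with code 0
}
```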
Apr 16 02:32:28.358996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c-rootfs.mount: Deactivated successfully. Apr 16 02:32:28.368975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6-rootfs.mount: Deactivated successfully. Apr 16 02:32:28.375529 containerd[1572]: time="2026-04-16T02:32:28.372865665Z" level=info msg="shim disconnected" id=51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6 namespace=k8s.io Apr 16 02:32:28.375529 containerd[1572]: time="2026-04-16T02:32:28.372912150Z" level=warning msg="cleaning up after shim disconnected" id=51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6 namespace=k8s.io Apr 16 02:32:28.375529 containerd[1572]: time="2026-04-16T02:32:28.372918005Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 02:32:28.375529 containerd[1572]: time="2026-04-16T02:32:28.374679009Z" level=info msg="shim disconnected" id=4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c namespace=k8s.io Apr 16 02:32:28.375529 containerd[1572]: time="2026-04-16T02:32:28.374695443Z" level=warning msg="cleaning up after shim disconnected" id=4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c namespace=k8s.io Apr 16 02:32:28.375529 containerd[1572]: time="2026-04-16T02:32:28.374701048Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 02:32:28.416825 containerd[1572]: time="2026-04-16T02:32:28.413898927Z" level=info msg="TearDown network for sandbox \"51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6\" successfully" Apr 16 02:32:28.416239 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6-shm.mount: Deactivated successfully. Apr 16 02:32:28.429958 containerd[1572]: time="2026-04-16T02:32:28.427469020Z" level=info msg="StopPodSandbox for \"51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6\" returns successfully" Apr 16 02:32:28.438691 containerd[1572]: time="2026-04-16T02:32:28.434825738Z" level=info msg="received sandbox container exit event sandbox_id:\"51bc6159d38438f66c99a4ba80afc6277a5f0ca81c1042fe7f53d511a1597ae6\" exit_status:137 exited_at:{seconds:1776306748 nanos:200623344}" monitor_name=criService Apr 16 02:32:28.462002 sshd[6732]: Accepted publickey for core from 10.0.0.1 port 48124 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:32:28.466591 sshd-session[6732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:32:28.541780 systemd-logind[1559]: New session 96 of user core. Apr 16 02:32:28.563748 systemd[1]: Started session-96.scope - Session 96 of User core. 
Apr 16 02:32:28.565061 containerd[1572]: time="2026-04-16T02:32:28.565009493Z" level=info msg="TearDown network for sandbox \"4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c\" successfully" Apr 16 02:32:28.565061 containerd[1572]: time="2026-04-16T02:32:28.565032444Z" level=info msg="StopPodSandbox for \"4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c\" returns successfully" Apr 16 02:32:28.565297 containerd[1572]: time="2026-04-16T02:32:28.565270451Z" level=info msg="received sandbox container exit event sandbox_id:\"4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c\" exit_status:137 exited_at:{seconds:1776306748 nanos:169742315}" monitor_name=criService Apr 16 02:32:28.645056 kubelet[2905]: I0416 02:32:28.640326 2905 scope.go:117] "RemoveContainer" containerID="57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d" Apr 16 02:32:28.645056 kubelet[2905]: I0416 02:32:28.643105 2905 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-hostproc\") pod \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " Apr 16 02:32:28.645056 kubelet[2905]: I0416 02:32:28.643230 2905 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fa6eadd-c61c-46c9-a233-f61300b39bd5-clustermesh-secrets\") pod \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " Apr 16 02:32:28.645056 kubelet[2905]: I0416 02:32:28.643316 2905 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-host-proc-sys-kernel\") pod \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " Apr 16 02:32:28.645056 kubelet[2905]: I0416 02:32:28.643329 2905 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-cilium-cgroup\") pod \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " Apr 16 02:32:28.645056 kubelet[2905]: I0416 02:32:28.643340 2905 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-xtables-lock\") pod \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " Apr 16 02:32:28.645056 kubelet[2905]: I0416 02:32:28.643352 2905 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7fa6eadd-c61c-46c9-a233-f61300b39bd5-hubble-tls\") pod \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " Apr 16 02:32:28.646172 kubelet[2905]: I0416 02:32:28.643363 2905 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8qst\" (UniqueName: \"kubernetes.io/projected/7fa6eadd-c61c-46c9-a233-f61300b39bd5-kube-api-access-n8qst\") pod \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " Apr 16 02:32:28.646172 kubelet[2905]: I0416 02:32:28.643372 2905 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-host-proc-sys-net\") pod \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " Apr 16 02:32:28.646172 kubelet[2905]: I0416 02:32:28.643383 2905 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-lib-modules\") pod \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " Apr 16 02:32:28.646172 kubelet[2905]: I0416 02:32:28.643422 2905 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fa6eadd-c61c-46c9-a233-f61300b39bd5-cilium-config-path\") pod \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " Apr 16 02:32:28.646172 kubelet[2905]: I0416 02:32:28.643433 2905 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-cni-path\") pod \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " Apr 16 02:32:28.646172 kubelet[2905]: I0416 02:32:28.643444 2905 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-bpf-maps\") pod \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " Apr 16 02:32:28.646346 kubelet[2905]: I0416 02:32:28.643454 2905 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-etc-cni-netd\") pod \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " Apr 16 02:32:28.646346 kubelet[2905]: I0416 02:32:28.643464 2905 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-cilium-run\") pod \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\" (UID: \"7fa6eadd-c61c-46c9-a233-f61300b39bd5\") " Apr 16 02:32:28.646346 kubelet[2905]: I0416 02:32:28.643637 2905 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7fa6eadd-c61c-46c9-a233-f61300b39bd5" (UID: "7fa6eadd-c61c-46c9-a233-f61300b39bd5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 02:32:28.646346 kubelet[2905]: I0416 02:32:28.643709 2905 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-hostproc" (OuterVolumeSpecName: "hostproc") pod "7fa6eadd-c61c-46c9-a233-f61300b39bd5" (UID: "7fa6eadd-c61c-46c9-a233-f61300b39bd5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 02:32:28.646346 kubelet[2905]: I0416 02:32:28.644478 2905 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7fa6eadd-c61c-46c9-a233-f61300b39bd5" (UID: "7fa6eadd-c61c-46c9-a233-f61300b39bd5"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 02:32:28.646454 kubelet[2905]: I0416 02:32:28.644499 2905 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7fa6eadd-c61c-46c9-a233-f61300b39bd5" (UID: "7fa6eadd-c61c-46c9-a233-f61300b39bd5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 02:32:28.646454 kubelet[2905]: I0416 02:32:28.644510 2905 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7fa6eadd-c61c-46c9-a233-f61300b39bd5" (UID: "7fa6eadd-c61c-46c9-a233-f61300b39bd5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 02:32:28.646454 kubelet[2905]: I0416 02:32:28.644519 2905 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7fa6eadd-c61c-46c9-a233-f61300b39bd5" (UID: "7fa6eadd-c61c-46c9-a233-f61300b39bd5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 02:32:28.646454 kubelet[2905]: I0416 02:32:28.645515 2905 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-cni-path" (OuterVolumeSpecName: "cni-path") pod "7fa6eadd-c61c-46c9-a233-f61300b39bd5" (UID: "7fa6eadd-c61c-46c9-a233-f61300b39bd5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 02:32:28.646454 kubelet[2905]: I0416 02:32:28.645535 2905 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7fa6eadd-c61c-46c9-a233-f61300b39bd5" (UID: "7fa6eadd-c61c-46c9-a233-f61300b39bd5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 02:32:28.699080 kubelet[2905]: I0416 02:32:28.648121 2905 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fa6eadd-c61c-46c9-a233-f61300b39bd5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7fa6eadd-c61c-46c9-a233-f61300b39bd5" (UID: "7fa6eadd-c61c-46c9-a233-f61300b39bd5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 02:32:28.740759 kubelet[2905]: I0416 02:32:28.740440 2905 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7fa6eadd-c61c-46c9-a233-f61300b39bd5" (UID: "7fa6eadd-c61c-46c9-a233-f61300b39bd5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 02:32:28.741399 kubelet[2905]: I0416 02:32:28.741384 2905 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7fa6eadd-c61c-46c9-a233-f61300b39bd5" (UID: "7fa6eadd-c61c-46c9-a233-f61300b39bd5"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 02:32:28.742169 kubelet[2905]: I0416 02:32:28.742133 2905 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fa6eadd-c61c-46c9-a233-f61300b39bd5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7fa6eadd-c61c-46c9-a233-f61300b39bd5" (UID: "7fa6eadd-c61c-46c9-a233-f61300b39bd5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 02:32:28.754035 containerd[1572]: time="2026-04-16T02:32:28.751773071Z" level=info msg="RemoveContainer for \"57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d\"" Apr 16 02:32:28.755086 kubelet[2905]: I0416 02:32:28.754847 2905 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-226fg\" (UniqueName: \"kubernetes.io/projected/01b7e1ef-3931-4b46-8f70-ce88202dc972-kube-api-access-226fg\") pod \"01b7e1ef-3931-4b46-8f70-ce88202dc972\" (UID: \"01b7e1ef-3931-4b46-8f70-ce88202dc972\") " Apr 16 02:32:28.755086 kubelet[2905]: I0416 02:32:28.754984 2905 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01b7e1ef-3931-4b46-8f70-ce88202dc972-cilium-config-path\") pod \"01b7e1ef-3931-4b46-8f70-ce88202dc972\" (UID: \"01b7e1ef-3931-4b46-8f70-ce88202dc972\") " Apr 16 02:32:28.755086 kubelet[2905]: I0416 02:32:28.755052 2905 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 16 02:32:28.755086 kubelet[2905]: I0416 02:32:28.755061 2905 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 16 02:32:28.755086 kubelet[2905]: I0416 02:32:28.755068 2905 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 16 02:32:28.755086 kubelet[2905]: I0416 02:32:28.755078 2905 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 16 02:32:28.755086 kubelet[2905]: I0416 02:32:28.755085 2905 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fa6eadd-c61c-46c9-a233-f61300b39bd5-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 16 02:32:28.755086 kubelet[2905]: I0416 02:32:28.755091 2905 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 16 02:32:28.757135 kubelet[2905]: I0416 02:32:28.755099 2905 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 16 02:32:28.757135 kubelet[2905]: I0416 02:32:28.755105 2905 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-xtables-lock\") on node \"localhost\" DevicePath 
\"\"" Apr 16 02:32:28.757135 kubelet[2905]: I0416 02:32:28.755110 2905 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 16 02:32:28.757135 kubelet[2905]: I0416 02:32:28.755115 2905 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 16 02:32:28.757135 kubelet[2905]: I0416 02:32:28.755119 2905 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fa6eadd-c61c-46c9-a233-f61300b39bd5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 16 02:32:28.757135 kubelet[2905]: I0416 02:32:28.755125 2905 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fa6eadd-c61c-46c9-a233-f61300b39bd5-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 16 02:32:28.770887 kubelet[2905]: I0416 02:32:28.770053 2905 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fa6eadd-c61c-46c9-a233-f61300b39bd5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7fa6eadd-c61c-46c9-a233-f61300b39bd5" (UID: "7fa6eadd-c61c-46c9-a233-f61300b39bd5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 02:32:28.773499 containerd[1572]: time="2026-04-16T02:32:28.772968551Z" level=info msg="RemoveContainer for \"57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d\" returns successfully" Apr 16 02:32:28.776801 kubelet[2905]: I0416 02:32:28.774715 2905 scope.go:117] "RemoveContainer" containerID="846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4" Apr 16 02:32:28.783421 kubelet[2905]: I0416 02:32:28.780273 2905 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fa6eadd-c61c-46c9-a233-f61300b39bd5-kube-api-access-n8qst" (OuterVolumeSpecName: "kube-api-access-n8qst") pod "7fa6eadd-c61c-46c9-a233-f61300b39bd5" (UID: "7fa6eadd-c61c-46c9-a233-f61300b39bd5"). InnerVolumeSpecName "kube-api-access-n8qst". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 02:32:28.789981 kubelet[2905]: I0416 02:32:28.783440 2905 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01b7e1ef-3931-4b46-8f70-ce88202dc972-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "01b7e1ef-3931-4b46-8f70-ce88202dc972" (UID: "01b7e1ef-3931-4b46-8f70-ce88202dc972"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 02:32:28.847592 containerd[1572]: time="2026-04-16T02:32:28.847292590Z" level=info msg="RemoveContainer for \"846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4\"" Apr 16 02:32:28.960294 containerd[1572]: time="2026-04-16T02:32:28.957299980Z" level=info msg="RemoveContainer for \"846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4\" returns successfully" Apr 16 02:32:28.962332 kubelet[2905]: I0416 02:32:28.941177 2905 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01b7e1ef-3931-4b46-8f70-ce88202dc972-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 16 02:32:28.962332 kubelet[2905]: I0416 02:32:28.942393 2905 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7fa6eadd-c61c-46c9-a233-f61300b39bd5-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 16 02:32:28.962332 kubelet[2905]: I0416 02:32:28.942986 2905 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n8qst\" (UniqueName: \"kubernetes.io/projected/7fa6eadd-c61c-46c9-a233-f61300b39bd5-kube-api-access-n8qst\") on node \"localhost\" DevicePath \"\"" Apr 16 02:32:28.962332 kubelet[2905]: I0416 02:32:28.960066 2905 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01b7e1ef-3931-4b46-8f70-ce88202dc972-kube-api-access-226fg" (OuterVolumeSpecName: "kube-api-access-226fg") pod "01b7e1ef-3931-4b46-8f70-ce88202dc972" (UID: "01b7e1ef-3931-4b46-8f70-ce88202dc972"). InnerVolumeSpecName "kube-api-access-226fg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 02:32:28.962332 kubelet[2905]: I0416 02:32:28.960339 2905 scope.go:117] "RemoveContainer" containerID="57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d" Apr 16 02:32:28.977490 containerd[1572]: time="2026-04-16T02:32:28.961266797Z" level=error msg="ContainerStatus for \"57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d\": not found" Apr 16 02:32:28.980237 kubelet[2905]: E0416 02:32:28.962958 2905 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d\": not found" containerID="57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d" Apr 16 02:32:28.980237 kubelet[2905]: I0416 02:32:28.962987 2905 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d"} err="failed to get container status \"57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d\": rpc error: code = NotFound desc = an error occurred when try to find container \"57216663a67dcb8d9f6d81a622779a0f15811031e8c13240c52098ee238f607d\": not found" Apr 16 02:32:28.980237 kubelet[2905]: I0416 02:32:28.963057 2905 scope.go:117] "RemoveContainer" containerID="846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4" Apr 16 02:32:28.984654 containerd[1572]: time="2026-04-16T02:32:28.980464128Z" level=error msg="ContainerStatus for \"846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4\" failed" error="rpc error: code = NotFound desc = an error 
occurred when try to find container \"846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4\": not found" Apr 16 02:32:28.985237 kubelet[2905]: E0416 02:32:28.985123 2905 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4\": not found" containerID="846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4" Apr 16 02:32:28.985309 kubelet[2905]: I0416 02:32:28.985256 2905 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4"} err="failed to get container status \"846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"846c917acf8b28ff450fb793c6016b0800802cadbaf81fb2b777650827d8d5c4\": not found" Apr 16 02:32:28.985309 kubelet[2905]: I0416 02:32:28.985287 2905 scope.go:117] "RemoveContainer" containerID="3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2" Apr 16 02:32:29.005750 systemd[1]: Removed slice kubepods-burstable-pod7fa6eadd_c61c_46c9_a233_f61300b39bd5.slice - libcontainer container kubepods-burstable-pod7fa6eadd_c61c_46c9_a233_f61300b39bd5.slice. Apr 16 02:32:29.006453 systemd[1]: kubepods-burstable-pod7fa6eadd_c61c_46c9_a233_f61300b39bd5.slice: Consumed 3min 13.070s CPU time, 130.9M memory peak, 1004K read from disk, 15.5M written to disk. Apr 16 02:32:29.011914 containerd[1572]: time="2026-04-16T02:32:29.007951462Z" level=info msg="RemoveContainer for \"3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2\"" Apr 16 02:32:29.060880 kubelet[2905]: I0416 02:32:29.058478 2905 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-226fg\" (UniqueName: \"kubernetes.io/projected/01b7e1ef-3931-4b46-8f70-ce88202dc972-kube-api-access-226fg\") on node \"localhost\" DevicePath \"\"" Apr 16 02:32:29.071282 containerd[1572]: time="2026-04-16T02:32:29.070996526Z" level=info msg="RemoveContainer for \"3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2\" returns successfully" Apr 16 02:32:29.083484 kubelet[2905]: I0416 02:32:29.083363 2905 scope.go:117] "RemoveContainer" containerID="768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7" Apr 16 02:32:29.101159 systemd[1]: var-lib-kubelet-pods-7fa6eadd\x2dc61c\x2d46c9\x2da233\x2df61300b39bd5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 16 02:32:29.105236 systemd[1]: var-lib-kubelet-pods-7fa6eadd\x2dc61c\x2d46c9\x2da233\x2df61300b39bd5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 16 02:32:29.105427 systemd[1]: var-lib-kubelet-pods-7fa6eadd\x2dc61c\x2d46c9\x2da233\x2df61300b39bd5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn8qst.mount: Deactivated successfully. Apr 16 02:32:29.105529 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f745095516c67339604ac923f5509e967c840ba426c82463d544ba06f31b42c-shm.mount: Deactivated successfully. Apr 16 02:32:29.108406 systemd[1]: var-lib-kubelet-pods-01b7e1ef\x2d3931\x2d4b46\x2d8f70\x2dce88202dc972-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d226fg.mount: Deactivated successfully. 
Apr 16 02:32:29.258946 containerd[1572]: time="2026-04-16T02:32:29.249138215Z" level=info msg="RemoveContainer for \"768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7\"" Apr 16 02:32:29.345062 containerd[1572]: time="2026-04-16T02:32:29.340510188Z" level=info msg="RemoveContainer for \"768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7\" returns successfully" Apr 16 02:32:29.444255 kubelet[2905]: I0416 02:32:29.444026 2905 scope.go:117] "RemoveContainer" containerID="b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a" Apr 16 02:32:29.541662 containerd[1572]: time="2026-04-16T02:32:29.536104197Z" level=info msg="RemoveContainer for \"b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a\"" Apr 16 02:32:29.621838 containerd[1572]: time="2026-04-16T02:32:29.621187049Z" level=info msg="RemoveContainer for \"b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a\" returns successfully" Apr 16 02:32:29.697832 kubelet[2905]: I0416 02:32:29.697124 2905 scope.go:117] "RemoveContainer" containerID="1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563" Apr 16 02:32:29.760887 containerd[1572]: time="2026-04-16T02:32:29.760384925Z" level=info msg="RemoveContainer for \"1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563\"" Apr 16 02:32:29.841490 containerd[1572]: time="2026-04-16T02:32:29.807281946Z" level=info msg="RemoveContainer for \"1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563\" returns successfully" Apr 16 02:32:29.881068 kubelet[2905]: I0416 02:32:29.880492 2905 scope.go:117] "RemoveContainer" containerID="8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17" Apr 16 02:32:29.929490 containerd[1572]: time="2026-04-16T02:32:29.929357079Z" level=info msg="RemoveContainer for \"8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17\"" Apr 16 02:32:30.035344 containerd[1572]: time="2026-04-16T02:32:30.034860347Z" level=info msg="RemoveContainer for \"8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17\" returns successfully" Apr 16 02:32:30.036236 kubelet[2905]: I0416 02:32:30.035863 2905 scope.go:117] "RemoveContainer" containerID="3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2" Apr 16 02:32:30.039783 containerd[1572]: time="2026-04-16T02:32:30.038706797Z" level=error msg="ContainerStatus for \"3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2\": not found" Apr 16 02:32:30.048956 kubelet[2905]: E0416 02:32:30.048760 2905 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2\": not found" containerID="3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2" Apr 16 02:32:30.087944 kubelet[2905]: I0416 02:32:30.087420 2905 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2"} err="failed to get container status \"3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e840f5f93b1a83b27967852ce6c8217ed2c01896322b7400a784a2e878db4b2\": not found" Apr 16 02:32:30.094419 kubelet[2905]: I0416 02:32:30.091674 
2905 scope.go:117] "RemoveContainer" containerID="768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7" Apr 16 02:32:30.113283 containerd[1572]: time="2026-04-16T02:32:30.111951785Z" level=error msg="ContainerStatus for \"768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7\": not found" Apr 16 02:32:30.128154 kubelet[2905]: E0416 02:32:30.127598 2905 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7\": not found" containerID="768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7" Apr 16 02:32:30.134179 kubelet[2905]: I0416 02:32:30.129696 2905 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7"} err="failed to get container status \"768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"768a1e169ca270cf4d25772399ffcb31ef991431a1792ff55a775990ba8477d7\": not found" Apr 16 02:32:30.134179 kubelet[2905]: I0416 02:32:30.129864 2905 scope.go:117] "RemoveContainer" containerID="b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a" Apr 16 02:32:30.140707 containerd[1572]: time="2026-04-16T02:32:30.138884757Z" level=error msg="ContainerStatus for \"b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a\": not found" Apr 16 02:32:30.142038 kubelet[2905]: E0416 02:32:30.141940 2905 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a\": not found" containerID="b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a" Apr 16 02:32:30.142154 kubelet[2905]: I0416 02:32:30.142043 2905 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a"} err="failed to get container status \"b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a\": rpc error: code = NotFound desc = an error occurred when try to find container \"b8ebea9821f08c978bb103d87eb5c60882bffb49fb2d64017d102fd1e354896a\": not found" Apr 16 02:32:30.142154 kubelet[2905]: I0416 02:32:30.142077 2905 scope.go:117] "RemoveContainer" containerID="1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563" Apr 16 02:32:30.147625 containerd[1572]: time="2026-04-16T02:32:30.147248232Z" level=error msg="ContainerStatus for \"1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563\": not found" Apr 16 02:32:30.165583 kubelet[2905]: E0416 02:32:30.165312 2905 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563\": 
not found" containerID="1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563" Apr 16 02:32:30.168043 kubelet[2905]: I0416 02:32:30.167833 2905 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563"} err="failed to get container status \"1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563\": rpc error: code = NotFound desc = an error occurred when try to find container \"1b456b1495e1e2a7694a70cf069933c990844a90dba6c85cf66bd8414ed31563\": not found" Apr 16 02:32:30.169247 kubelet[2905]: I0416 02:32:30.169127 2905 scope.go:117] "RemoveContainer" containerID="8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17" Apr 16 02:32:30.170149 containerd[1572]: time="2026-04-16T02:32:30.170039929Z" level=error msg="ContainerStatus for \"8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17\": not found" Apr 16 02:32:30.172775 kubelet[2905]: E0416 02:32:30.172581 2905 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17\": not found" containerID="8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17" Apr 16 02:32:30.181986 kubelet[2905]: I0416 02:32:30.181183 2905 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17"} err="failed to get container status \"8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e3e98f97a54e951a3bdcc7dff65d9694c416f4f0a560e4397a793e13c74cd17\": not found" Apr 16 02:32:30.330531 systemd[1]: Removed slice kubepods-besteffort-pod01b7e1ef_3931_4b46_8f70_ce88202dc972.slice - libcontainer container kubepods-besteffort-pod01b7e1ef_3931_4b46_8f70_ce88202dc972.slice. Apr 16 02:32:30.335166 systemd[1]: kubepods-besteffort-pod01b7e1ef_3931_4b46_8f70_ce88202dc972.slice: Consumed 37.808s CPU time, 37.9M memory peak, 3.1M read from disk, 8K written to disk. Apr 16 02:32:31.248939 kubelet[2905]: E0416 02:32:31.248684 2905 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:32:32.085408 kubelet[2905]: I0416 02:32:32.085257 2905 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01b7e1ef-3931-4b46-8f70-ce88202dc972" path="/var/lib/kubelet/pods/01b7e1ef-3931-4b46-8f70-ce88202dc972/volumes" Apr 16 02:32:32.086082 kubelet[2905]: I0416 02:32:32.085824 2905 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fa6eadd-c61c-46c9-a233-f61300b39bd5" path="/var/lib/kubelet/pods/7fa6eadd-c61c-46c9-a233-f61300b39bd5/volumes" Apr 16 02:32:32.373610 sshd[6764]: Connection closed by 10.0.0.1 port 48124 Apr 16 02:32:32.378078 sshd-session[6732]: pam_unix(sshd:session): session closed for user core Apr 16 02:32:32.392012 systemd[1]: sshd@95-10.0.0.34:22-10.0.0.1:48124.service: Deactivated successfully. Apr 16 02:32:32.396843 systemd[1]: session-96.scope: Deactivated successfully. 
Apr 16 02:32:32.397163 systemd[1]: session-96.scope: Consumed 2.469s CPU time, 24.6M memory peak. Apr 16 02:32:32.398397 systemd-logind[1559]: Session 96 logged out. Waiting for processes to exit. Apr 16 02:32:32.400917 systemd[1]: Started sshd@96-10.0.0.34:22-10.0.0.1:48140.service - OpenSSH per-connection server daemon (10.0.0.1:48140). Apr 16 02:32:32.405945 systemd-logind[1559]: Removed session 96. Apr 16 02:32:32.493288 systemd[1]: Created slice kubepods-burstable-podc6196852_f788_4bcb_b606_c58437c304a6.slice - libcontainer container kubepods-burstable-podc6196852_f788_4bcb_b606_c58437c304a6.slice. Apr 16 02:32:32.548241 kubelet[2905]: I0416 02:32:32.548000 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6196852-f788-4bcb-b606-c58437c304a6-lib-modules\") pod \"cilium-554lg\" (UID: \"c6196852-f788-4bcb-b606-c58437c304a6\") " pod="kube-system/cilium-554lg" Apr 16 02:32:32.550745 kubelet[2905]: I0416 02:32:32.549653 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c6196852-f788-4bcb-b606-c58437c304a6-cilium-ipsec-secrets\") pod \"cilium-554lg\" (UID: \"c6196852-f788-4bcb-b606-c58437c304a6\") " pod="kube-system/cilium-554lg" Apr 16 02:32:32.550745 kubelet[2905]: I0416 02:32:32.549696 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9zfd\" (UniqueName: \"kubernetes.io/projected/c6196852-f788-4bcb-b606-c58437c304a6-kube-api-access-d9zfd\") pod \"cilium-554lg\" (UID: \"c6196852-f788-4bcb-b606-c58437c304a6\") " pod="kube-system/cilium-554lg" Apr 16 02:32:32.550745 kubelet[2905]: I0416 02:32:32.549746 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6196852-f788-4bcb-b606-c58437c304a6-clustermesh-secrets\") pod \"cilium-554lg\" (UID: \"c6196852-f788-4bcb-b606-c58437c304a6\") " pod="kube-system/cilium-554lg" Apr 16 02:32:32.550745 kubelet[2905]: I0416 02:32:32.549757 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6196852-f788-4bcb-b606-c58437c304a6-host-proc-sys-kernel\") pod \"cilium-554lg\" (UID: \"c6196852-f788-4bcb-b606-c58437c304a6\") " pod="kube-system/cilium-554lg" Apr 16 02:32:32.550745 kubelet[2905]: I0416 02:32:32.549775 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6196852-f788-4bcb-b606-c58437c304a6-cilium-cgroup\") pod \"cilium-554lg\" (UID: \"c6196852-f788-4bcb-b606-c58437c304a6\") " pod="kube-system/cilium-554lg" Apr 16 02:32:32.550847 kubelet[2905]: I0416 02:32:32.549790 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6196852-f788-4bcb-b606-c58437c304a6-xtables-lock\") pod \"cilium-554lg\" (UID: \"c6196852-f788-4bcb-b606-c58437c304a6\") " pod="kube-system/cilium-554lg" Apr 16 02:32:32.550847 kubelet[2905]: I0416 02:32:32.549801 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6196852-f788-4bcb-b606-c58437c304a6-host-proc-sys-net\") pod 
\"cilium-554lg\" (UID: \"c6196852-f788-4bcb-b606-c58437c304a6\") " pod="kube-system/cilium-554lg" Apr 16 02:32:32.550847 kubelet[2905]: I0416 02:32:32.549812 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6196852-f788-4bcb-b606-c58437c304a6-hostproc\") pod \"cilium-554lg\" (UID: \"c6196852-f788-4bcb-b606-c58437c304a6\") " pod="kube-system/cilium-554lg" Apr 16 02:32:32.550847 kubelet[2905]: I0416 02:32:32.549822 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6196852-f788-4bcb-b606-c58437c304a6-cni-path\") pod \"cilium-554lg\" (UID: \"c6196852-f788-4bcb-b606-c58437c304a6\") " pod="kube-system/cilium-554lg" Apr 16 02:32:32.550847 kubelet[2905]: I0416 02:32:32.549846 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6196852-f788-4bcb-b606-c58437c304a6-etc-cni-netd\") pod \"cilium-554lg\" (UID: \"c6196852-f788-4bcb-b606-c58437c304a6\") " pod="kube-system/cilium-554lg" Apr 16 02:32:32.550847 kubelet[2905]: I0416 02:32:32.549912 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6196852-f788-4bcb-b606-c58437c304a6-cilium-config-path\") pod \"cilium-554lg\" (UID: \"c6196852-f788-4bcb-b606-c58437c304a6\") " pod="kube-system/cilium-554lg" Apr 16 02:32:32.550945 kubelet[2905]: I0416 02:32:32.549926 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6196852-f788-4bcb-b606-c58437c304a6-bpf-maps\") pod \"cilium-554lg\" (UID: \"c6196852-f788-4bcb-b606-c58437c304a6\") " pod="kube-system/cilium-554lg" Apr 16 02:32:32.550945 kubelet[2905]: I0416 02:32:32.549938 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6196852-f788-4bcb-b606-c58437c304a6-hubble-tls\") pod \"cilium-554lg\" (UID: \"c6196852-f788-4bcb-b606-c58437c304a6\") " pod="kube-system/cilium-554lg" Apr 16 02:32:32.550945 kubelet[2905]: I0416 02:32:32.549948 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6196852-f788-4bcb-b606-c58437c304a6-cilium-run\") pod \"cilium-554lg\" (UID: \"c6196852-f788-4bcb-b606-c58437c304a6\") " pod="kube-system/cilium-554lg" Apr 16 02:32:32.583757 sshd[6781]: Accepted publickey for core from 10.0.0.1 port 48140 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:32:32.585030 sshd-session[6781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:32:32.607963 systemd-logind[1559]: New session 97 of user core. Apr 16 02:32:32.619531 systemd[1]: Started session-97.scope - Session 97 of User core. Apr 16 02:32:32.644116 sshd[6784]: Connection closed by 10.0.0.1 port 48140 Apr 16 02:32:32.645392 sshd-session[6781]: pam_unix(sshd:session): session closed for user core Apr 16 02:32:32.661155 systemd[1]: sshd@96-10.0.0.34:22-10.0.0.1:48140.service: Deactivated successfully. Apr 16 02:32:32.719907 systemd[1]: session-97.scope: Deactivated successfully. Apr 16 02:32:32.721533 systemd-logind[1559]: Session 97 logged out. 
Waiting for processes to exit. Apr 16 02:32:32.744788 systemd[1]: Started sshd@97-10.0.0.34:22-10.0.0.1:48156.service - OpenSSH per-connection server daemon (10.0.0.1:48156). Apr 16 02:32:32.746241 systemd-logind[1559]: Removed session 97. Apr 16 02:32:32.819608 sshd[6795]: Accepted publickey for core from 10.0.0.1 port 48156 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 02:32:32.824275 sshd-session[6795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:32:32.837904 kubelet[2905]: E0416 02:32:32.836874 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:32:32.842790 containerd[1572]: time="2026-04-16T02:32:32.842601128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-554lg,Uid:c6196852-f788-4bcb-b606-c58437c304a6,Namespace:kube-system,Attempt:0,}" Apr 16 02:32:32.844981 systemd-logind[1559]: New session 98 of user core. Apr 16 02:32:32.854956 systemd[1]: Started session-98.scope - Session 98 of User core. Apr 16 02:32:32.869776 containerd[1572]: time="2026-04-16T02:32:32.869741673Z" level=info msg="connecting to shim d931d3d0343ec0144b19977a412d32e07c922228f916c88a541fa33b851216d4" address="unix:///run/containerd/s/8fd9361f997bfba035d48862bc66a34e863e77bcafaf0e36862b7df98e94e00e" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:32:32.911765 systemd[1]: Started cri-containerd-d931d3d0343ec0144b19977a412d32e07c922228f916c88a541fa33b851216d4.scope - libcontainer container d931d3d0343ec0144b19977a412d32e07c922228f916c88a541fa33b851216d4. Apr 16 02:32:33.018763 containerd[1572]: time="2026-04-16T02:32:33.018495750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-554lg,Uid:c6196852-f788-4bcb-b606-c58437c304a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d931d3d0343ec0144b19977a412d32e07c922228f916c88a541fa33b851216d4\"" Apr 16 02:32:33.026453 kubelet[2905]: E0416 02:32:33.026313 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:32:33.041640 containerd[1572]: time="2026-04-16T02:32:33.041106455Z" level=info msg="CreateContainer within sandbox \"d931d3d0343ec0144b19977a412d32e07c922228f916c88a541fa33b851216d4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 16 02:32:33.058066 containerd[1572]: time="2026-04-16T02:32:33.057453271Z" level=info msg="Container 831b3872d75c4ccaa43b30dc92513ee52acd027156b39c720b32c18a9793838b: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:32:33.072386 containerd[1572]: time="2026-04-16T02:32:33.072250511Z" level=info msg="CreateContainer within sandbox \"d931d3d0343ec0144b19977a412d32e07c922228f916c88a541fa33b851216d4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"831b3872d75c4ccaa43b30dc92513ee52acd027156b39c720b32c18a9793838b\"" Apr 16 02:32:33.078695 containerd[1572]: time="2026-04-16T02:32:33.078033115Z" level=info msg="StartContainer for \"831b3872d75c4ccaa43b30dc92513ee52acd027156b39c720b32c18a9793838b\"" Apr 16 02:32:33.086140 containerd[1572]: time="2026-04-16T02:32:33.085961858Z" level=info msg="connecting to shim 831b3872d75c4ccaa43b30dc92513ee52acd027156b39c720b32c18a9793838b" address="unix:///run/containerd/s/8fd9361f997bfba035d48862bc66a34e863e77bcafaf0e36862b7df98e94e00e" protocol=ttrpc version=3 Apr 16 02:32:33.179120 
systemd[1]: Started cri-containerd-831b3872d75c4ccaa43b30dc92513ee52acd027156b39c720b32c18a9793838b.scope - libcontainer container 831b3872d75c4ccaa43b30dc92513ee52acd027156b39c720b32c18a9793838b. Apr 16 02:32:33.275855 containerd[1572]: time="2026-04-16T02:32:33.275794882Z" level=info msg="StartContainer for \"831b3872d75c4ccaa43b30dc92513ee52acd027156b39c720b32c18a9793838b\" returns successfully" Apr 16 02:32:33.287885 systemd[1]: cri-containerd-831b3872d75c4ccaa43b30dc92513ee52acd027156b39c720b32c18a9793838b.scope: Deactivated successfully. Apr 16 02:32:33.293200 containerd[1572]: time="2026-04-16T02:32:33.292108891Z" level=info msg="received container exit event container_id:\"831b3872d75c4ccaa43b30dc92513ee52acd027156b39c720b32c18a9793838b\" id:\"831b3872d75c4ccaa43b30dc92513ee52acd027156b39c720b32c18a9793838b\" pid:6863 exited_at:{seconds:1776306753 nanos:290702538}" Apr 16 02:32:33.360978 kubelet[2905]: E0416 02:32:33.360023 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:32:34.483835 kubelet[2905]: E0416 02:32:34.483516 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:32:34.499685 containerd[1572]: time="2026-04-16T02:32:34.499019216Z" level=info msg="CreateContainer within sandbox \"d931d3d0343ec0144b19977a412d32e07c922228f916c88a541fa33b851216d4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 16 02:32:34.523881 containerd[1572]: time="2026-04-16T02:32:34.523532901Z" level=info msg="Container efe10d990c4a443245f990d2536d029e2f0e25da0adcc714a7bb66afc6387183: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:32:34.560267 containerd[1572]: time="2026-04-16T02:32:34.560118261Z" level=info msg="CreateContainer within sandbox \"d931d3d0343ec0144b19977a412d32e07c922228f916c88a541fa33b851216d4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"efe10d990c4a443245f990d2536d029e2f0e25da0adcc714a7bb66afc6387183\"" Apr 16 02:32:34.562279 containerd[1572]: time="2026-04-16T02:32:34.562148520Z" level=info msg="StartContainer for \"efe10d990c4a443245f990d2536d029e2f0e25da0adcc714a7bb66afc6387183\"" Apr 16 02:32:34.563703 containerd[1572]: time="2026-04-16T02:32:34.563334664Z" level=info msg="connecting to shim efe10d990c4a443245f990d2536d029e2f0e25da0adcc714a7bb66afc6387183" address="unix:///run/containerd/s/8fd9361f997bfba035d48862bc66a34e863e77bcafaf0e36862b7df98e94e00e" protocol=ttrpc version=3 Apr 16 02:32:34.637721 systemd[1]: Started cri-containerd-efe10d990c4a443245f990d2536d029e2f0e25da0adcc714a7bb66afc6387183.scope - libcontainer container efe10d990c4a443245f990d2536d029e2f0e25da0adcc714a7bb66afc6387183. Apr 16 02:32:34.711061 containerd[1572]: time="2026-04-16T02:32:34.710944790Z" level=info msg="StartContainer for \"efe10d990c4a443245f990d2536d029e2f0e25da0adcc714a7bb66afc6387183\" returns successfully" Apr 16 02:32:34.719811 systemd[1]: cri-containerd-efe10d990c4a443245f990d2536d029e2f0e25da0adcc714a7bb66afc6387183.scope: Deactivated successfully. 
Apr 16 02:32:34.726274 containerd[1572]: time="2026-04-16T02:32:34.726150484Z" level=info msg="received container exit event container_id:\"efe10d990c4a443245f990d2536d029e2f0e25da0adcc714a7bb66afc6387183\" id:\"efe10d990c4a443245f990d2536d029e2f0e25da0adcc714a7bb66afc6387183\" pid:6913 exited_at:{seconds:1776306754 nanos:724093382}" Apr 16 02:32:34.865329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efe10d990c4a443245f990d2536d029e2f0e25da0adcc714a7bb66afc6387183-rootfs.mount: Deactivated successfully. Apr 16 02:32:35.505586 kubelet[2905]: E0416 02:32:35.505404 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:32:35.528063 containerd[1572]: time="2026-04-16T02:32:35.527815300Z" level=info msg="CreateContainer within sandbox \"d931d3d0343ec0144b19977a412d32e07c922228f916c88a541fa33b851216d4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 16 02:32:35.641687 containerd[1572]: time="2026-04-16T02:32:35.638527264Z" level=info msg="Container ad9b7f0ac293e4637b35d53fa2fffbda3bcb8ee413c747c784238d082958a2cd: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:32:35.658071 containerd[1572]: time="2026-04-16T02:32:35.657902485Z" level=info msg="CreateContainer within sandbox \"d931d3d0343ec0144b19977a412d32e07c922228f916c88a541fa33b851216d4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ad9b7f0ac293e4637b35d53fa2fffbda3bcb8ee413c747c784238d082958a2cd\"" Apr 16 02:32:35.664062 containerd[1572]: time="2026-04-16T02:32:35.663936713Z" level=info msg="StartContainer for \"ad9b7f0ac293e4637b35d53fa2fffbda3bcb8ee413c747c784238d082958a2cd\"" Apr 16 02:32:35.670820 containerd[1572]: time="2026-04-16T02:32:35.670668033Z" level=info msg="connecting to shim ad9b7f0ac293e4637b35d53fa2fffbda3bcb8ee413c747c784238d082958a2cd" address="unix:///run/containerd/s/8fd9361f997bfba035d48862bc66a34e863e77bcafaf0e36862b7df98e94e00e" protocol=ttrpc version=3 Apr 16 02:32:35.708728 systemd[1]: Started cri-containerd-ad9b7f0ac293e4637b35d53fa2fffbda3bcb8ee413c747c784238d082958a2cd.scope - libcontainer container ad9b7f0ac293e4637b35d53fa2fffbda3bcb8ee413c747c784238d082958a2cd. Apr 16 02:32:35.859526 containerd[1572]: time="2026-04-16T02:32:35.858914690Z" level=info msg="StartContainer for \"ad9b7f0ac293e4637b35d53fa2fffbda3bcb8ee413c747c784238d082958a2cd\" returns successfully" Apr 16 02:32:35.859463 systemd[1]: cri-containerd-ad9b7f0ac293e4637b35d53fa2fffbda3bcb8ee413c747c784238d082958a2cd.scope: Deactivated successfully. Apr 16 02:32:35.868644 containerd[1572]: time="2026-04-16T02:32:35.866454069Z" level=info msg="received container exit event container_id:\"ad9b7f0ac293e4637b35d53fa2fffbda3bcb8ee413c747c784238d082958a2cd\" id:\"ad9b7f0ac293e4637b35d53fa2fffbda3bcb8ee413c747c784238d082958a2cd\" pid:6958 exited_at:{seconds:1776306755 nanos:865665486}" Apr 16 02:32:35.972484 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad9b7f0ac293e4637b35d53fa2fffbda3bcb8ee413c747c784238d082958a2cd-rootfs.mount: Deactivated successfully. 
Apr 16 02:32:36.257633 kubelet[2905]: E0416 02:32:36.257031 2905 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:32:36.547420 kubelet[2905]: E0416 02:32:36.545977 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:32:36.561340 containerd[1572]: time="2026-04-16T02:32:36.561064062Z" level=info msg="CreateContainer within sandbox \"d931d3d0343ec0144b19977a412d32e07c922228f916c88a541fa33b851216d4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 16 02:32:36.583519 containerd[1572]: time="2026-04-16T02:32:36.583357757Z" level=info msg="Container bde56c852f451d03223f1271add79cc6fa0a47c70704e65b618b7ae0dbed4aa1: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:32:36.599811 containerd[1572]: time="2026-04-16T02:32:36.599509353Z" level=info msg="CreateContainer within sandbox \"d931d3d0343ec0144b19977a412d32e07c922228f916c88a541fa33b851216d4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bde56c852f451d03223f1271add79cc6fa0a47c70704e65b618b7ae0dbed4aa1\"" Apr 16 02:32:36.604205 containerd[1572]: time="2026-04-16T02:32:36.603692355Z" level=info msg="StartContainer for \"bde56c852f451d03223f1271add79cc6fa0a47c70704e65b618b7ae0dbed4aa1\"" Apr 16 02:32:36.611056 containerd[1572]: time="2026-04-16T02:32:36.610945158Z" level=info msg="connecting to shim bde56c852f451d03223f1271add79cc6fa0a47c70704e65b618b7ae0dbed4aa1" address="unix:///run/containerd/s/8fd9361f997bfba035d48862bc66a34e863e77bcafaf0e36862b7df98e94e00e" protocol=ttrpc version=3 Apr 16 02:32:36.688986 systemd[1]: Started cri-containerd-bde56c852f451d03223f1271add79cc6fa0a47c70704e65b618b7ae0dbed4aa1.scope - libcontainer container bde56c852f451d03223f1271add79cc6fa0a47c70704e65b618b7ae0dbed4aa1. Apr 16 02:32:36.773817 systemd[1]: cri-containerd-bde56c852f451d03223f1271add79cc6fa0a47c70704e65b618b7ae0dbed4aa1.scope: Deactivated successfully. Apr 16 02:32:36.777591 containerd[1572]: time="2026-04-16T02:32:36.777450281Z" level=info msg="received container exit event container_id:\"bde56c852f451d03223f1271add79cc6fa0a47c70704e65b618b7ae0dbed4aa1\" id:\"bde56c852f451d03223f1271add79cc6fa0a47c70704e65b618b7ae0dbed4aa1\" pid:6999 exited_at:{seconds:1776306756 nanos:773739644}" Apr 16 02:32:36.802151 containerd[1572]: time="2026-04-16T02:32:36.801624405Z" level=info msg="StartContainer for \"bde56c852f451d03223f1271add79cc6fa0a47c70704e65b618b7ae0dbed4aa1\" returns successfully" Apr 16 02:32:36.846327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bde56c852f451d03223f1271add79cc6fa0a47c70704e65b618b7ae0dbed4aa1-rootfs.mount: Deactivated successfully. 
Apr 16 02:32:37.073099 kubelet[2905]: I0416 02:32:37.072119 2905 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-16T02:32:37Z","lastTransitionTime":"2026-04-16T02:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 16 02:32:37.560336 kubelet[2905]: E0416 02:32:37.560086 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:32:37.568940 containerd[1572]: time="2026-04-16T02:32:37.568466647Z" level=info msg="CreateContainer within sandbox \"d931d3d0343ec0144b19977a412d32e07c922228f916c88a541fa33b851216d4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 16 02:32:37.600934 containerd[1572]: time="2026-04-16T02:32:37.600045826Z" level=info msg="Container 20189ce0cffdcba23a244c7eaf4f7763d7732b4f94148ed5935ed2df2242a11c: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:32:37.629087 containerd[1572]: time="2026-04-16T02:32:37.628729128Z" level=info msg="CreateContainer within sandbox \"d931d3d0343ec0144b19977a412d32e07c922228f916c88a541fa33b851216d4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"20189ce0cffdcba23a244c7eaf4f7763d7732b4f94148ed5935ed2df2242a11c\"" Apr 16 02:32:37.634499 containerd[1572]: time="2026-04-16T02:32:37.634330266Z" level=info msg="StartContainer for \"20189ce0cffdcba23a244c7eaf4f7763d7732b4f94148ed5935ed2df2242a11c\"" Apr 16 02:32:37.637442 containerd[1572]: time="2026-04-16T02:32:37.637307922Z" level=info msg="connecting to shim 20189ce0cffdcba23a244c7eaf4f7763d7732b4f94148ed5935ed2df2242a11c" address="unix:///run/containerd/s/8fd9361f997bfba035d48862bc66a34e863e77bcafaf0e36862b7df98e94e00e" protocol=ttrpc version=3 Apr 16 02:32:37.732220 systemd[1]: Started cri-containerd-20189ce0cffdcba23a244c7eaf4f7763d7732b4f94148ed5935ed2df2242a11c.scope - libcontainer container 20189ce0cffdcba23a244c7eaf4f7763d7732b4f94148ed5935ed2df2242a11c. 
Apr 16 02:32:37.823763 containerd[1572]: time="2026-04-16T02:32:37.823138762Z" level=info msg="StartContainer for \"20189ce0cffdcba23a244c7eaf4f7763d7732b4f94148ed5935ed2df2242a11c\" returns successfully" Apr 16 02:32:38.230922 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_256)) Apr 16 02:32:38.568840 kubelet[2905]: E0416 02:32:38.568690 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:32:38.599830 kubelet[2905]: I0416 02:32:38.599412 2905 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-554lg" podStartSLOduration=6.599395315 podStartE2EDuration="6.599395315s" podCreationTimestamp="2026-04-16 02:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:32:38.599205281 +0000 UTC m=+1123.891093828" watchObservedRunningTime="2026-04-16 02:32:38.599395315 +0000 UTC m=+1123.891283865" Apr 16 02:32:39.577974 kubelet[2905]: E0416 02:32:39.577729 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:32:41.084995 kubelet[2905]: E0416 02:32:41.083355 2905 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-wmdr2" podUID="224b1be2-a057-4a43-9d23-a0957387a459" Apr 16 02:32:42.667929 systemd-networkd[1492]: lxc_health: Link UP Apr 16 02:32:42.677039 systemd-networkd[1492]: lxc_health: Gained carrier Apr 16 02:32:42.833849 kubelet[2905]: E0416 02:32:42.832805 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:32:43.087504 kubelet[2905]: E0416 02:32:43.087330 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:32:43.633037 kubelet[2905]: E0416 02:32:43.632626 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:32:43.951156 systemd-networkd[1492]: lxc_health: Gained IPv6LL Apr 16 02:32:44.640746 kubelet[2905]: E0416 02:32:44.640224 2905 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:32:49.673364 sshd[6798]: Connection closed by 10.0.0.1 port 48156 Apr 16 02:32:49.676493 sshd-session[6795]: pam_unix(sshd:session): session closed for user core Apr 16 02:32:49.692878 systemd[1]: sshd@97-10.0.0.34:22-10.0.0.1:48156.service: Deactivated successfully. Apr 16 02:32:49.701032 systemd[1]: session-98.scope: Deactivated successfully. Apr 16 02:32:49.701238 systemd[1]: session-98.scope: Consumed 1.111s CPU time, 26M memory peak. Apr 16 02:32:49.702947 systemd-logind[1559]: Session 98 logged out. Waiting for processes to exit. Apr 16 02:32:49.707673 systemd-logind[1559]: Removed session 98.